Computational Engineering today, AI Engineering tomorrow, and when do we get J.A.R.V.I.S.?
By Lin Kayser
Published March 29, 2023

The last couple of weeks were interesting, to say the least.

But these weeks were a good moment to reflect on where we stand in the field of Computational Engineering and Digital Manufacturing, and how much more work there is to get anywhere near the vision that my co-founder Michael Gallo and I had when we started on our journey with Hyperganic.

Building Hyperganic

After Michael and I sold our last company, I left Adobe at the beginning of 2015 to embark on a new venture. I had become fascinated by the potential of industrial 3D printing, Additive Manufacturing, to change how we think about building physical objects, and entire machines, in general. A year later, Michael and I settled on a name for the new company: Hyperganic.

The organic world, Nature, builds objects additively and algorithmically, by adding tiny particles, by growing things. What if we humans could purposefully, intentionally, do the same? Could we go beyond Nature and build hyper-organic, Hyperganic, machines? We articulated this in a tongue-in-cheek mission trailer (even before we actually incorporated) that I recently re-shared on Twitter:

What do you need to build objects that approach the complexity of organic objects?

None of the Computer Aided Design (CAD) systems and geometry kernels out there were built for this. They were created for the world of simple, human-drawn objects. They aid in the drawing of blueprints. These systems are descendants of pen and paper. CAD is a visual, human-driven process that limits the complexity of the resulting objects through the mind-numbing labor an engineer has to put into sketching all the details.

In a world of algorithm-driven design, CAD falls apart. So Michael and I got to work and designed a geometry kernel for Computational Engineering, with a focus on Additive Manufacturing. This kernel allows us to design every object that a 3D printer can output; it allows the precise placement of each particle that makes up a part, a structure, or eventually, an entire machine. It uses the concept of a voxel, a three-dimensional pixel, to represent geometry. The complexity and sophistication of an object are no longer limited by the geometry representation used.
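To make the idea concrete, here is a minimal sketch of the voxel concept in Python with NumPy. It is illustrative only, not Hyperganic's actual kernel: geometry becomes a 3D grid of solid or empty cells, filled by whatever rule you can express in code, so a gyroid lattice costs the representation no more than a plain sphere.

```python
# Minimal illustrative sketch of a voxel-based geometry representation.
# Not Hyperganic's kernel; just the underlying idea: any rule that answers
# "is this point solid?" can become printable geometry.
import numpy as np

RES = 128  # voxels per axis
axis = np.linspace(-1.0, 1.0, RES)
x, y, z = np.meshgrid(axis, axis, axis, indexing="ij")

# Signed distance to a sphere of radius 0.8 (negative = inside)
sphere = np.sqrt(x**2 + y**2 + z**2) - 0.8

# A gyroid lattice, a shape nobody would ever sketch by hand in CAD
gyroid = (np.cos(8 * np.pi * x) * np.sin(8 * np.pi * y)
          + np.cos(8 * np.pi * y) * np.sin(8 * np.pi * z)
          + np.cos(8 * np.pi * z) * np.sin(8 * np.pi * x))

# "Paint matter into space": a voxel is solid where both rules agree
voxels = (sphere <= 0.0) & (np.abs(gyroid) <= 0.4)

print(f"{voxels.sum():,} of {voxels.size:,} voxels are solid")
```

However intricate the rule gets, the representation stays the same grid of voxels, which is exactly why the geometry never outgrows the kernel.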

If you ever wondered how Josefine can create things you have never seen before, it's because she literally paints matter into space algorithmically and never has to worry about whether the geometry gets too complex for the mathematical model underlying a CAD system.

The Advent of Computational Engineering

The first major public object we showed was the iconic Hyperganic rocket engine, which you have seen in many publications. It was also the centerpiece of the 2018 TEDx Talk I gave, which introduced the concept of algorithm-driven Computational Engineering.

Fast forward to 2023, and you can see the most spectacular result of Computational Engineering so far: Josefine's Aerospike, which has been called a historic milestone in 3D Printing.

Aerospike engine printed on AMCM M4K machine

So how does Computational Engineering work, and why is it such a departure from traditional engineering approaches?

In Computational Engineering, you design an algorithmic model of a real-world object. You do not visually sketch a device that you already have in your mind. Instead, you work conceptually, dissecting the functionality into logical blocks, into information flows, into data interfaces.

Object-oriented programming languages are ideal for that. A conventional engineer works purely graphically, based on visual knowledge of how things are supposed to look. In contrast, a Computational Engineer starts with the logical composition of the functional object, and with the capabilities and limitations of the desired manufacturing process.

Only then will the engineer come up with the actual design and modify coding blocks to get the desired geometry. By plugging in physical formulas and building feedback loops based on simulation or actual testing, the algorithms can explore a vast number of variants. And even simple code can produce complex results, for example to increase the surface area of a heat exchanger.
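To illustrate that loop, here is a deliberately simplified sketch in Python. All names, dimensions, and the area formula are hypothetical; the point is the structure: a logical model of the part, a plugged-in formula, a manufacturing constraint, and a feedback loop that searches for a variant meeting the requirement.

```python
# Illustrative sketch of a Computational Engineering feedback loop.
# Hypothetical, heavily simplified heat-exchanger fin model.
from dataclasses import dataclass

@dataclass
class FinArray:
    """Logical description of the part, not a drawing of it."""
    fin_count: int
    fin_height_mm: float = 20.0
    fin_thickness_mm: float = 1.0
    base_width_mm: float = 80.0

    def surface_area_mm2(self) -> float:
        # Plugged-in formula: two faces plus the tip of every fin
        per_fin = (2 * self.fin_height_mm * self.base_width_mm
                   + self.fin_thickness_mm * self.base_width_mm)
        return self.fin_count * per_fin

    def fits_build_envelope(self) -> bool:
        # Manufacturing constraint: fins plus gaps must fit on the base
        min_gap_mm = 0.8  # assumed minimum printable gap
        return self.fin_count * (self.fin_thickness_mm + min_gap_mm) <= self.base_width_mm

def design_for_area(target_mm2: float) -> FinArray:
    """Feedback loop: add fins until the area target is met or space runs out."""
    design = FinArray(fin_count=1)
    while design.surface_area_mm2() < target_mm2:
        candidate = FinArray(fin_count=design.fin_count + 1)
        if not candidate.fits_build_envelope():
            break  # constraint hit; a real model would vary other parameters
        design = candidate
    return design

result = design_for_area(target_mm2=120_000)
print(result, f"-> {result.surface_area_mm2():,.0f} mm²")
```

In a real computational model the "formula" would be a thermal or CFD simulation and the geometry would come out of the voxel kernel, but the shape of the workflow, describe, evaluate, feed back, repeat, is the same.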

The end results are objects that look alien, as if from another world: organic, but with human intent. In other words, Hyperganic.

So, here we are in 2023. Using computational methods and the right geometry representation, voxels, you can design the world's most complex machines.

Engineering under Moore’s Law

You can now move engineering under Moore's Law. Under this paradigm, your solution will always use the best computational models available. Redesigns can take seconds or minutes, and knowledge about how to create physical objects can spread as abstract algorithms. It is no longer locked in the brains of a few engineers.

Any company that applies Computational Engineering consistently will radically outperform any other company that still tries to draw things by hand.

I talked about this extensively at the 2017 Singularity University summit in Berlin.

There is no way a human can outperform a computer, just like nobody using a mechanical calculator can beat someone who uses Excel. More importantly, an engineer using computational methods will design things that they could never have dreamed of before.

And we will build and improve upon the previous solutions, instead of reinventing the wheel all the time.

This “standing on the shoulders of giants” is what makes Moore’s Law work. We have all seen it in action in the last decades of the computing revolution. But the trend is arguably much older.

Engineering and Generative AI

So, where do we go from here? While it's clear that algorithms executed by a computer will outperform any human, it still requires work to create all the code. In the recent buzz around Generative AI and its deep learning models, many have started to suggest that the profession of the engineer might end soon, as we will just tell an AI to design us a jet engine, and magically a blueprint will appear.

Midjourney prompt: “Blueprint for the Lockheed SR-71 jet engine”

Unfortunately, it’s not that simple.

Today’s neural nets are trained on enormous data sets which allow them to convincingly synthesize new output. Some of these results are truly spectacular. ChatGPT would probably do a great job rewriting this text (why do I even bother!), and many of the results from tools like Midjourney are stunning.

But the images from Midjourney are just that: images. They are not 3D objects, let alone machines. And there is simply no data set large enough for an AI engineer to train on. We have lots of pictures out there, and loads of text, but not much else.

Aren’t pictures enough, if you had enough of them?

Even if a million untrained hunter-gatherers looked at a billion pictures of machines, they would not understand the basic engineering principles that went into them. Maybe they could draw sketches that resembled them. But to engineer an aircraft is a different task than to create a painting of it.

Case in point: Midjourney has looked at billions of pictures, yet it produces things that make no logical sense.

I asked Midjourney to give me a 1980s digital watch (these things were cool at some point…).

And at first glance, these are convincing objects.

They look great, so how hard can it be to go from here to a functional engineering design? Quite hard, actually.

Just by looking at these images closely, you can see that there is no understanding of the intent behind the functionality shown. The digital display, for example, makes absolutely no sense. Even though all the digital watches out there have the same kind of display (4 or 6 digits, grouped in pairs for hours, minutes and seconds, with 7 individually addressable segments to form the numbers 0…9), Midjourney completely fails at this test, as it has no concept of this device telling the time. And please don't get me wrong: over time, Generative AI will create convincing images without flaws, even 3D models of them (through tools like photogrammetry).
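To show how little logic is actually involved, here is the entire functional rule behind such a display, sketched in Python: every digit is a fixed pattern of the seven segments a through g. An image model reproduces what displays look like; it never learns this mapping, which is why its watch faces cannot tell the time.

```python
# The functional logic of a 7-segment display: each digit 0-9 lights a fixed
# subset of segments a (top), b/c (right), d (bottom), e/f (left), g (middle).
SEGMENTS = {
    0: "abcdef", 1: "bc",     2: "abged", 3: "abgcd",   4: "fgbc",
    5: "afgcd",  6: "afgedc", 7: "abc",   8: "abcdefg", 9: "abfgcd",
}

def lit_segments(hhmm: str) -> list[str]:
    """Which segments light up for a time string such as '12:45'."""
    return [SEGMENTS[int(ch)] for ch in hhmm if ch.isdigit()]

print(lit_segments("12:45"))  # ['bc', 'abged', 'fgbc', 'afgcd']
```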

But building a truly working mechanism is something altogether different.

And this is not a trivial problem. It is fundamental to the challenge we are trying to solve: Creating functioning machines through Generative AI.

Building a Generative AI for Engineering

Before we can create a Generative AI for engineering, we first have to find a data set for the neural nets to train on.

It simply doesn’t exist.

What should a neural net train on?

Actual blueprints? They depict the way an object should be fabricated, but not how it will function. Patent filings? Even human engineers, with all their background knowledge, have trouble understanding those. But they could be an interesting source when augmented with other data.

3D models of machines? Not many of these are out there, and they, again, don't give us any insight into why something was designed the way it was, or how it works. What moves, what doesn't? What forces are applied where? There is no standard for this.

The only field where we currently have the logic (information flow, intent, geometry, infinite variations) is Computational Engineering. We have the code that describes inputs, outputs, and the reason why something looks a certain way in one case and totally different in another. We have all that in the form of the source code that describes the computational model.

Language models like ChatGPT can already write computer code. Some of this code, today, is hilariously non-functional, but, again, it's only a question of time until the language models have been trained on enough GitHub repositories to get really good at this.

What if we specifically trained a ChatGPT-like language model on our engineering code and moved to a conversational model for engineering?

This will not free us from having to encode our engineering knowledge in abstract form, by writing code once, twice, a few times for different engineering areas. But then we can likely move to a system where an engineer explains a problem and the AI starts writing the code that implements the solution. At first this will happen much the way "co-pilots" already work in many other fields, like text editing. Interestingly, if an AI writes code, that code can also be read and interpreted by a human, corrected, improved upon, and discussed. This is in contrast to a lot of generative models, where you get an output without any way of knowing why the machine arrived at it.
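Here is a purely hypothetical illustration of what such an exchange might produce. The prompt and the code are invented for this post; the point is that the output is readable engineering logic a human can inspect and correct, not an opaque blob of geometry.

```python
# Engineer's request (hypothetical): "Give me the centerline of a cooling
# channel that spirals twice around a 40 mm diameter chamber over 60 mm of
# height." A co-pilot-style model might answer with code like this, which
# the engineer can read, question, and modify.
import math

def spiral_channel_centerline(chamber_radius_mm: float = 20.0,
                              turns: int = 2,
                              height_mm: float = 60.0,
                              samples: int = 200) -> list[tuple[float, float, float]]:
    """Points along the channel centerline; a geometry kernel would sweep
    the channel cross-section along this path to produce the printable part."""
    points = []
    for i in range(samples + 1):
        t = i / samples
        angle = 2 * math.pi * turns * t
        points.append((chamber_radius_mm * math.cos(angle),
                       chamber_radius_mm * math.sin(angle),
                       height_mm * t))
    return points

path = spiral_channel_centerline()
print(f"{len(path)} centerline points, ending at z = {path[-1][2]:.1f} mm")
```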

Increasingly, this stuff may look like J.A.R.V.I.S., which, interestingly enough, is always a discussion between the creator (Tony Stark) and the machine.

Where to now?

Obviously, looking into how we can use AI to write the code for complex engineering solutions is a field I am highly interested in. I will continue down this path, trying to inch closer to the original vision I had for the future of engineering:

Building machines as complex as Nature.