My Why (for AI Makerspace)
A Not-So-Brief History, or My AI Engineering Evolution
Set Up
I recently revamped our website at aimakerspace.io for the next chapter. Though it is quite simple now (and could use a lot more vibe-coding upgrades, for sure), my focus wasn’t on flash, it was on clarity of messaging (and to some extent, branding).
I took the opportunity to make updates to the way I talk about my why, and to prune away a layer or two of ambiguity so that it was clearer not only to me, but also for our future customers, community members, and future members of our team.
At the same time, the company has been going through major shake-ups throughout 2025 - from personnel to a shifting high-level strategy - all of which appear to finally be settling into the next phase. Constant adaptation has been required, or so it feels to me steering the ship. I don’t expect 2026 to be like this, though, and for that I am grateful.
During my revamp of the website, I really wanted to do a proper “about us” and “team” page (I opted to roll them into one for now). As I considered exactly what I wanted to greet our potential future customers and community members with, I remembered part of the foreword that I had written for The AI Engineering Bootcamp Book. I had written the document right around the turn of the year, 2025.
Aside: While we originally planned to publish the book in 2025, it looks now as if we’ll need to bring on some ghostwriting support to get it out in 2026. I will make sure that I maintain skin in the game on the process of producing something that both Chris and I are proud to put our names on, and that is worthy of carrying the AI Makerspace brand forward and of donning the epithet by which AIM is often known: Build 🏗️. Ship 🚢. Share 🚀.
Anyway, the foreword is likely to change, as are the times, and my personal story and journey will be updated along with all of the above. Of course, the website will change as well. While I’ve shared My Story there, it, too, will likely be updated over time.
As for now, here, today, I’d like to share the journey update (with minor modifications from the webpage) as of January 2, 2025, with the caveat that I know many chapters are left to be written. Perhaps I can even give a 2025 next chapter update this holiday season 🤓.
May the internet time stamp this one for me - as a symbol of AI Makerspace becoming AIM v0.2 🙏
My Story
My Why
Over 10 years of teaching at university while working across industry, academia, and government contracting, I saw first-hand that the only students succeeding in the 21st-century workforce were those who were:
📚 Learning new skills
🏗 Using those new skills to build awesome things
🚀 Sharing the awesome things they built with others
It wasn’t Ph.D. degrees, prestige, or pedigree—it was making stuff as part of a community.
It was 📚 learning, 🏗️ building, and 🚀 sharing, over and over, that was making people successful over time.
And there is no way to sustain doing this unless it’s something you love.
When I used to teach manufacturing courses, there were always a few students who truly loved the work. But only a few.
It turns out that it’s that way in every domain.
Including building software.
If you’re called to become an AI Engineer and build the future with us, with software today that is likely connected to hardware tomorrow, then welcome.
It’s no longer enough to just build and ship your code. You’ve got to communicate how awesome it is to your customers and stakeholders, because otherwise they won’t understand.
You must share 🚀.
This is our ethos, and the secret to our success. It’s what we teach people and companies to use as their guide, their North Star, when creating a path to success in the age of AI.
Build 🏗️ • Ship 🚢 • Share 🚀
A Brief History
2015 Finished my Ph.D. in Engineering (Focus: Computational Design & Optimization) and immediately began learning Machine Learning (thanks, Deeplearning.ai).
2020 Went all-in on remote teaching as a Professor of Practice in Mechanical Engineering at the University of Dayton—atoms weren’t ready, but bits were.
2021 Moved to Silicon Valley, built ML Engineer & MLOps programs at FourthBrain (Andrew Ng’s AI Fund), and ran online events with Deeplearning.ai—education can’t be measured only in ARR.
2023 After moving back to Ohio, co-founded AI Makerspace with Chris “🪄 The Wiz” Alexiuk and began weekly YouTube sessions—creating a place for people who love staying at The LLM Edge as much as we do.
2025 Now we’re growing steadily and making an impact worldwide.
A Not-So-Brief History, or My AI Engineering Evolution
*This is my story. While some of it is written, it is, of course, still being written. May this glimpse into my path be useful as you get your bearings on yours in this new age of AI.*
---
The Wiz often reminds me that if I’m just now hearing about a new idea in AI, it has finally reached the level of public consciousness.
This has been true for my entire career.
In 2015 I graduated with my PhD in Engineering. My focus area was technically Computational Design and Optimization. I studied quite a lot of gradient descent and matrix algebra. At the time, if you’d asked me, I would’ve told you I knew nothing about Machine Learning.
You see, I went to school in Ohio, a part of the country where “engineering” is more likely to mean mechanical or electrical engineering than software or computer science.
Machine Learning
In 2016 I led a NASA project entitled Empirical Optimization of Additive Manufacturing [Ref]. The idea was to use sensor data directly (empirical data) instead of physics-based simulations to optimize the microstructure and properties of 3D-printed metal.
“Will you leverage Machine Learning for the empirical optimization studies?” asked our NASA sponsor.
The only correct answer was, of course, “Yes, as soon as I figure out exactly what machine learning is, I’ll let you know during the next update!”
You see, I was not in buzzword compliance. I spoke the language of empirical optimization, while the world started to speak the language of Machine Learning and Deep Learning.
Something about ML, Deep Learning, and AI was bubbling up to the surface of public consciousness.
I started asking grad school programmer friends how to learn this new important thing that no one taught me during school. I was quickly led to Deeplearning.ai’s Machine Learning Specialization and Deep Learning Specialization. I started learning Python, applying modeling techniques to our data, and teaching our sponsor about ML.
From 2011 to 2015, I was too busy focusing on optimization in my graduate studies to hear about the Machine Learning Specialization! Classic grad school.
That was over now, though. I was all in on Machine Learning.
Data Science
Fast forward a few more years. By 2018, “Data Science” was in vogue, and I was in a classic predicament:
I had a Ph.D., and I wasn’t (and still am not) a software engineer by trade, but I wanted to be a data scientist. The job title was, by any measure, very sexy.
Interestingly, it was in fact 2012 when both the Machine Learning Specialization and the article “Data Scientist: The Sexiest Job of the 21st Century [Ref]” were created.
It wasn’t until 2018 that all of this had bubbled up to the surface level of public consciousness, at least enough for me to hear about it, and I was officially all in on “Data Science”.
There was an important shift in the branding of ML at the time as well; people started talking a lot more about “AI” than about “Machine Learning.”
This continues to this day.
As I did data science (i.e., AI) work for real companies, I found that it was much harder to make money for clients with the existing tools and techniques than it was to do research with them.
I felt I needed to move away from R&D and towards applications. The unicorn data scientists out there seemed to understand this. They built products that were powered by AI.
I was picking up on a vibe shift from data science to real value.
This was perhaps a bit of an early trend; one that came to fruition during and after COVID when companies realized that all the data scientists they hired actually weren’t making their companies much (if any!) money at all.
I decided to go all in on learning how to think about AI “end-to-end.”
AI & Product Management
I’ve been all in on AI ever since, obsessed with figuring out what “AI” actually is and means, and (perhaps more importantly) how to find the ways it should be used to create real value during each step of its evolution.
This led, naturally, to products.
When building any product, you should always be able to articulate the problem, why it’s a problem, who it’s a problem for, and what you would measure to know if you solved it. While this is still table stakes in product management, when we add AI in, we’re adding it to the final step where we decide what a solution looks like.
With a background in research, I knew only enough about product management to be dangerous. Moreover, in Ohio, the word “product” to me circa 2020 still indicated a physical thing that you hold in your hand.
By 2021, I demonstrated just how “all in” I was on AI by moving to Silicon Valley. I needed to build products with people at the AI and startup edge.
Machine Learning Engineering
You might remember that by 2022, you had to be behind the curve to aim at becoming a Data Scientist. What the industry really wanted was Machine Learning Engineers; that is, people who could look end-to-end and connect data science to Machine Learning Operations (MLOps).
MLE was the new sexiest title in the industry. MLOps got super hot thereafter. I was fortunate to find myself within Andrew Ng’s AI Fund and working on building boot camps for ML Engineers and MLOps practitioners.
I remember trying to decide on the curriculum - there were so many questions!
How much modeling should we include in the course?
How much basic engineering do we need to teach?
What’s the difference between MLOps and DevOps?
At the time, experiment tracking was all the rage, and Data Science departments around the world seemed more focused on process and efficiency than ever before. It was also a time when AutoML was getting a lot of attention, and the overall trend was toward data science modeling activities becoming less valuable and more likely to be automated in the future.
This shift is really where data science started to move towards engineering, for good.
What was clear was that there was a hunger for MLE in the market, and it was a buzzword for folks to get out ahead of, whether on their resume, LinkedIn profile, or (for hiring managers) in their job descriptions.
Being “all in on end-to-end AI” really meant trying to learn the right aspects of machine learning, of software engineering and operations, and of AI product management and business.
Now it was about finding the right problems to solve and then using the right technology to solve them.
Historically, most of my time as an AI/Data Science/MLE/MLOps consultant was spent helping clients understand how AI can be used in their business.
This is still true today.
But somehow, it’s become ever-so-slightly easier as the industry has matured.
ChatGPT, Generative AI, and LLMs
When ChatGPT came out, people started talking about Generative AI.
There were a few pieces of wisdom I remember not being able to shake as fundamental business realities:
Of all the ways that companies can create value for their customers, most of those ways don’t require AI
Of all the ways that do require AI, most of them don’t require Generative AI or Large Language Models (LLMs)
From 2018 to 2022, we didn’t have to consider LLMs, and it was much less intuitive for the average person to imagine ways users could leverage AI in their business.
The barrier to entry to be able to engage an ML consultant was much higher than it is today.
It was harder to explain to clients how to think about using it. It was also harder to implement custom ML and AI solutions. These two things have shifted to change the playing field.
Post-ChatGPT, it’s easier to explain to clients how to think about using AI. “Let’s start by thinking about a chatbot interface.”
Post-Large Language Model Meta AI (LLaMA), it’s easier to pull open-source, commercial-off-the-shelf LLMs down and customize them to customers’ use cases.
As the models have improved post-ChatGPT, so has the infrastructure for building end-to-end. From hosting models to hosting entire production-grade end-to-end applications, the tooling to build LLM-powered applications that leverage context and reasoning has made custom back-end software development even easier.
Wiz likes to say these days “I’m not a front-end developer, but Claude is.”
It’s never been easier to build end-to-end.
It’s never been easier to think about what we should build end-to-end and why.
The Age of AI Engineering
Before data science, every company had R&D and engineering. They had no additional “science” department.
The path from empirical optimization to machine learning, deep learning, data science, AI product management, machine learning engineering, MLOps, and through to today’s LLMs can teach us a lot about what to expect.
In 2023, when we founded AI Makerspace, you might not be surprised to hear that our first course to launch was LLM Ops [Ref]. Our second course was LLM Engineering [Ref].
Our third course was actually a combination of the first two, and we rebranded. We realized that in order to put LLMs “in production” (that is, inside of products), we needed the same foundational pieces as before.
A new job title had started to gain traction in the industry. It signaled a shift away from thinking of AI as a custom job. In the world of 2024, AI felt, well, mature.
It feels today like we’re no longer grasping in the dark looking for business value. We can start simply with internal chatbots, and go from there.
There is no longer an infinite well of insights and model experimenting that we need to do before we can start assembling end-to-end pieces of a larger puzzle and getting the product into the hands of customers or internal stakeholders.
The engineers started to realize that rather than pick up a new programming language, it was probably a better idea to start learning AI.
The data scientists started to realize, “Wow, if I don’t learn engineering, I need to go fully back to research.”
The models are better now, at literally all tasks. They’re so good that we can pull black boxes off the shelf and use them. We can send them a request and receive a response back. We can and should use models as APIs now. This is especially true of LLMs.
This fundamental shift shoved data scientists outside of the API, where the developers and (software) engineers have always lived.
The AI Engineering Bootcamp
As 2024 kicked off, we noticed this maturity and this shift. With its long and storied history, AI had finally properly linked up with Engineering (in the Silicon Valley sense, i.e., Software Engineering).
We took the most business-relevant aspects of our LLM Ops and LLM Engineering courses to build Cohort 1 of The AI Engineering Bootcamp [Ref].
We built the course to address some practical realities for aspiring AI Engineers and leaders today. For example,
The Jupyter Notebook is no longer where you can do most of your job. However, the best AI engineers are fantastic with notebooks.
Machine Learning modeling is no longer highly valued. However, the best AI engineers can fine-tune and align models.
Data Science managers today who came up in a different paradigm have never had hands-on practice with the tooling, techniques, and best practices for building production LLM applications. They need to get back into the code before they can lead.
Given baseline knowledge of AI, it’s never been easier to become an engineer. Given baseline knowledge of software engineering, it’s never been easier to learn to build with AI.
AI has become table stakes.
Engineering (at least prototyping) is becoming table stakes.
Understanding what to build and why is becoming less of a “unicorn” quality and more like something businesses expect.
There are still R&D labs and engineering departments. There are still data science departments and “Centers of Excellence” that companies are trying to figure out what to do with.
The people and companies applying data science to real problems and building those solutions into their products (i.e., in production) are the ones doing AI Engineering today.
The Future of AI Engineering
You no longer need a PhD to do AI. However, you do need to grok papers like Attention Is All You Need [Ref], written by a bunch of PhDs in 2017.
The levels of abstraction keep increasing, as does the prerequisite knowledge required for entry into the field.
There are core patterns that you’ll need to master to become proficient at AI Engineering, including Prompt Engineering, Retrieval Augmented Generation (RAG), and Agents. We’ll cover these in this book!
In the future, we’re likely to see AI Engineering teams manage tens, hundreds, or even thousands of custom models. In this new paradigm, we’ll see a focus on post-training practices like fine-tuning and alignment. In short, we’ll see a return to data science R&D! However, to balance performance and cost properly, the people building these models will need the skills of a data scientist and a proper software engineer.
The future is bright for people and companies who decide to go all in on AI Engineering today.
There has never been a better time to round out your skill sets to make yourself indispensable.
It has never been easier to build, ship, and share production AI applications that create real value.
At the end of the day, that’s all AI Engineering really is!
In other words, AI Engineering has finally reached a level of public consciousness where you should really be paying attention.
Join the evolution.
May this book be a useful onramp to the next phase of your career and a reliable guide in your journey.
Dr. Greg
January 2, 2025

