This is Jensen Huang, the CEO of NVIDIA, currently the second most valuable company in the world. He's wearing a $9,000 leather jacket. NVIDIA provides the hardware for AI.
Sam Altman gives me socialist vibes. That's an exaggeration to make you think, "what the fuck," but in almost every podcast I watch him on, he brings up UBI. Why does the person at the forefront of this space care so much about UBI? I think he understands human capability and realizes how powerful this technology is.
A Shift in My Timeline
I've had a few discussions about the future of humanity with AI, and I used to think that if I didn't make it within 10 years, I would forever be a cog in the machine. However, I believe my timeline has shifted considerably.
The capabilities of AI are a lot scarier than people say. Most people judge AI based on consumer models like GPT-4o in ChatGPT, but they forget that's a $20/mo product, and there are companies pumping billions into AI development. Not $20/mo.
Most people who disregard AI are just way too lazy. They don't care enough to spend the time to understand the technology. Artificial intelligence is fundamentally built on trying to recreate human intelligence. It was, and is being, built by the most intelligent humans, all in a room working out how to copy and paste their brains, but do a better job. The top 1% of smartest people are all working on a tool that is smarter than they are, because it takes the best of each of them and combines it into one.
A Simple Breakdown of AI
Most people who disregard AI should just spend two seconds thinking about the paragraph above before even bothering to understand how neural networks and vector databases work. You most likely aren't in the top 1% of thinkers; you're most likely like the rest of us. Average. How, in your right mind, are you not worried?
Now here's a simple breakdown of the science of AI. All it is right now is just a system that takes a lot of information, learns from it, and then makes decisions or predictions based on that data. Imagine it like a super smart librarian who has read every book in the world and can use that knowledge to answer questions, write essays, or even create art. AI uses neural networks, which are like digital brains, to process this information. These networks mimic the way human brains work, with layers of nodes that communicate with each other to solve problems. The more data they get, the smarter they become. It's like giving our librarian more books to read and more practice answering questions, only these "books" are vast amounts of digital information, and the "practice" is constant learning and improving from that data.
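To make that "layers of nodes" idea concrete, here is a minimal Python sketch of a tiny network learning the XOR pattern from four examples. It's a toy illustration of the same principle, predict, measure the error, adjust the connections, and not how any production model like GPT-4o is actually built; those use billions of parameters and vastly more data.

```python
import numpy as np

# A toy "digital brain": two layers of nodes learning XOR from examples.
rng = np.random.default_rng(0)

# Training data: inputs and the answers we want the network to learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized "connections" between the layers of nodes.
W1 = rng.normal(0, 1, (2, 8))
b1 = np.zeros((1, 8))
W2 = rng.normal(0, 1, (8, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0  # learning rate: how big each correction step is

for step in range(5000):
    # Forward pass: information flows through the layers.
    h = sigmoid(X @ W1 + b1)      # hidden layer activations
    out = sigmoid(h @ W2 + b2)    # the network's current predictions

    # Backward pass: measure the error and nudge every connection
    # to reduce it -- this is the "learning from data" part.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))  # close to [[0], [1], [1], [0]] after training
```

The "more books for the librarian" analogy maps directly onto this loop: more examples and more passes through them mean more chances to correct the connections.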
The thing is, even the current science is already good enough to solve the majority of problems. Consider how fast ChatGPT can spit out an answer and how little compute you're allocated as a single user. Imagine what would be possible if those tools could answer much more slowly and spend more time thinking. If they had more brain power.
The Implications of Project GR00T
The playing field will change exponentially as these systems gain more compute, a bigger and more powerful brain. There will be a point where the US stops relying on TSMC for chips, simply due to the amount of resources being poured into chip development in the private US sector. The second biggest shift that will impact AI will come when we have open-world, realistic physics environments. I originally thought this was further down the road, but I just watched this video by NVIDIA, released two months ago:
What this video showcases is that NVIDIA is already working on virtual environments to train robots: Project GR00T. The reason Project GR00T is scary is simple: imagine the implications of this technology. Picture a Tesla Gigafactory being simulated virtually 500,000 times until it is the perfect Gigafactory. Every potential issue is ironed out, every inefficiency optimized, and every component perfectly aligned. Then, once this flawless virtual factory is achieved, you go and build it in the real world.
This process means that by the time the physical factory is constructed, it's already the pinnacle of efficiency and productivity, with no room for error. The time and cost savings are immense, but so is the potential for disruption. Human trial and error, which used to be part of innovation and learning, is effectively eliminated. Robots and AI don't need breaks, don't get tired, and don't make mistakes the way humans do. The competitive edge they offer is unparalleled.
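Here is a deliberately simplified Python sketch of the underlying idea: search a cheap virtual model many times, keep the best result, and only build the winner. The simulate_factory function, its parameters, and its "physics" are made up for illustration; this is not how NVIDIA's actual tooling works.

```python
import random

def simulate_factory(num_robots: int, conveyor_speed: float) -> float:
    """Return a made-up throughput score for a hypothetical virtual layout."""
    # Pretend physics: more robots help, but overly fast conveyors cause errors.
    error_rate = max(0.0, conveyor_speed - 1.5) * 0.3
    return num_robots * conveyor_speed * (1.0 - error_rate)

best_config, best_score = None, float("-inf")

# "Build" the factory 500,000 times virtually -- trivially cheap compared
# to building even one physical prototype.
for _ in range(500_000):
    config = (random.randint(10, 200), random.uniform(0.5, 3.0))
    score = simulate_factory(*config)
    if score > best_score:
        best_config, best_score = config, score

print(f"Best virtual layout: {best_config}, score {best_score:.1f}")
# Only this single winning configuration would ever be built physically.
```

The real versions of this idea use physically accurate simulators rather than a one-line scoring function, but the loop is the same: iterate virtually at massive scale, then deploy once.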
With technologies like Project GR00T, the pace of innovation and the scale at which it can occur become almost unimaginable. The power to perfect complex systems virtually and then deploy them flawlessly in the real world accelerates the capabilities of AI and robotics beyond our current comprehension. This isn’t just about making better factories; it’s about the potential for AI to outpace human capabilities in ways we haven’t even begun to fully understand.
So, when we consider the future with AI, it's not just about what these systems can do now, but about what they will be able to do very soon. And that, frankly, should make us all pause and think about the world we are creating.