Must We Musk to Understand the Universe?

Today Elon Musk announced the formation of xAI. “The goal of X.AI is to understand the true nature of the universe.” The company has set up a meet-and-greet with the new team during a Twitter Spaces chat on Friday, July 14.

I have been experimenting with ChatGPT and just upgraded to ChatGPT-4. I have been charmed by the speed with which answers are generated. Some results have been spot-on, while others have hit random, far-reaching bumps in the ether. My searches and requests haven’t been in the realm, however, where woke culture intercedes or pops up, so I haven’t yet experienced the “destruction” of civilization that Musk has been warning about with unregulated AI that includes woke intrusions alongside blatant hallucinations.

To counteract this culture Musk claims has invaded OpenAI (a company he co-founded), he and colleagues are launching a new company called xAI to rival OpenAI and other presumptuous AI companies through truth-telling. According to Reuters today, Musk registered a firm named X.AI Corp, incorporated in Nevada, in March of this year and made the public aware of the company today.

My questions then become “whose truth are we telling through AI” and “what truths will be told.” Back in 2019, Elon Musk proposed as the “utility function” of AI technology “… a close coupling between collective human intelligence and digital intelligence” to “maximize the freedom of action of humanity” (Elon Musk on Artificial Intelligence 2019). I liked that idea when he said it, and I still do.

Perhaps, however, my idea of freedom of action is different from what Musk intended by his observation. I do believe, as Musk does, that “There is a risk that advanced AI either eliminates or constrains humanity’s growth.” I just think that setting up a company claiming to “tell the truth of reality” is already putting up another infused-with-bias edifice of error. [See footnote 1 below.]

I’m of the opinion that the AI cruise ship has sailed; we are all aboard it, as if on a big floating petri dish of humanity, and it will take all of us, working through the ins and outs of AI, to steer it. I agree with Mo Gawdat in the introduction of his book Scary Smart, where he talks about the new superheroes for AI: “It is not the experts who have the capability to alleviate the threat facing humanity as a result of the emergence of superintelligence. No, it is you and I who have that power. More importantly, it is you and I who have that responsibility.”

We ALL have to step up to the plate, profess our truths about our universe, and work WITH Musk and the other AI players, not be popcorn-popping wilters on the sidelines.


FOOTNOTE 1: According to ChatGPT-3.5 (I also used ChatGPT-4 but liked the 3.5 result better), the term “edifice of error” typically refers to a complex or elaborate structure built upon flawed or incorrect assumptions, beliefs, or principles. It suggests that the foundation or basis of the structure is flawed, leading to subsequent errors or falsehoods that are built upon it.

ADDENDUM FOR JULY 29: I’ve since signed Pause Giant AI Experiments: An Open Letter so I can be in the loop (although the letter came out a few months ago, in March of this year). I’m now trying to find evidence of the work being done by those claiming to sit on the pause button. Musk certainly is not pausing within the AI industry, yet his is the third signature on the open letter.

Sam Altman’s AI conversation and ChatGPT’s answer to my question

I’ve started doing some research on Artificial Intelligence. I’m mainly interested in its application to support learning for the millions of potential “students” who find themselves outside of a formal educational system, whether because of lack of opportunity or poverty. The idea would be that personalized learning scaffolding could be built through AI to fit the educational trajectory that a student needs to take. My desire to start down that AI road began before OpenAI’s ChatGPT came to the forefront.

As OpenAI came into my purview, I listened to an interview with Sam Altman, the CEO of OpenAI, to see if I could pick up any clues as to what might be coming down the AI pipe. Altman said that AI is the biggest technological platform of our generation—a legitimate revolution. He said that with AI, the marginal costs of intelligence and energy are trending toward zero. AI will impact ALL computing and will quicken iteration cycles. He said we will see an explosion of new companies that use the large language model as a service.

The main societal issue at the beginning of the AI revolution will be disruption of economic activity. He suggested that we will need a new social contract for how we think about fairly distributing wealth, who will get access to AGI systems, and how governance will work. I think the AI disruption will also affect how we think about education: how it is distributed, and even what the nature of education is. What will we need to learn? Who will determine what we need to learn?

In the spirit of OpenAI, I decided to ask ChatGPT “What are the best ways in which AI can support learning?” I was surprised at its ability to immediately inform me of five different ways articulated in succinct terms. Here is what it gave back.

AI can support learning in several ways, including:

  1. Personalized Learning: AI can personalize learning experiences for individual students, using algorithms to adapt to their needs and progress. This can help students learn at their own pace, with content and activities that are tailored to their specific learning style and level.
  2. Intelligent Tutoring Systems: Intelligent tutoring systems can provide real-time feedback to students, helping them understand and learn from mistakes, and improve their skills. These systems can also track student progress and suggest activities or resources to help them improve.
  3. Automated Grading: AI-powered tools can help automate grading and provide more consistent and objective evaluations of student work. This can save teachers time and provide students with faster feedback on their progress.
  4. Adaptive Testing: Adaptive testing systems can use AI algorithms to dynamically adjust the difficulty level of questions based on the student’s performance. This can help identify knowledge gaps and provide more accurate assessments of student learning.
  5. Content Creation: AI can also be used to create educational content, such as videos or interactive simulations, that is tailored to specific learning objectives and student needs.

By incorporating AI into the learning process, students can benefit from more personalized and adaptive learning experiences, faster and more consistent feedback, and access to high-quality educational resources.
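To make item 4 of ChatGPT’s answer concrete, here is a minimal sketch of how an adaptive test might adjust difficulty. This is purely illustrative: the function names are my own invention, and the simple staircase rule (step up after a correct answer, step down after a miss) is a toy stand-in for the statistical models, such as item response theory, that real adaptive testing systems use.

```python
def next_difficulty(current: int, answered_correctly: bool,
                    min_level: int = 1, max_level: int = 10) -> int:
    """Step difficulty up after a correct answer, down after a miss,
    clamped to the allowed range of levels."""
    step = 1 if answered_correctly else -1
    return max(min_level, min(max_level, current + step))

def run_session(responses, start_level: int = 5):
    """Trace the difficulty of each question a student sees, given a
    sequence of right/wrong responses. Returns (trace, final_level)."""
    level, trace = start_level, []
    for correct in responses:
        trace.append(level)            # difficulty of the question asked
        level = next_difficulty(level, correct)
    return trace, level
```

For example, a student who answers correctly twice, misses once, then recovers would see questions at levels 5, 6, 7, 6 and end the session at level 7 — the system has homed in on the edge of what they can do, which is exactly the knowledge-gap identification the list describes.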