Must We Musk to Understand the Universe?

Today Elon Musk announced the formation of xAI. “The goal of X.AI is to understand the true nature of the universe.” The company has set up a meet-and-greet with the new team during a Twitter Spaces chat on Friday, July 14.

I have been experimenting with ChatGPT and just upgraded to ChatGPT-4. I have been charmed by the speed with which answers are generated. Some results have been spot-on, while others have hit random, far-reaching bumps in the ether. My searches and requests, however, haven’t been in the realm where woke culture intercedes or pops up, so I haven’t yet experienced the “destruction” of civilization that Musk has been warning about: unregulated AI with woke intrusions alongside blatant hallucinations.

To counteract this culture Musk claims has invaded OpenAI (a company he co-founded), he and colleagues are launching a new company, xAI, to rival—through truth-telling—OpenAI and other presumptuous AI companies. According to Reuters today, Musk registered a firm named X.AI Corp, incorporated in Nevada, in March of this year and made the public aware of the company today.

My questions then become “whose truth are we telling through AI” and “what truths will be told.” Back in 2019, Elon Musk urged as the “utility function” of AI technology “… a close coupling between collective human intelligence and digital intelligence” to “maximize the freedom of action of humanity” (Elon Musk on Artificial Intelligence 2019). I liked that idea when he said it, and I still do.

Perhaps, however, my idea of freedom of action is different from what Musk intended by his observation. I do believe, as Musk does, that “There is a risk that advanced AI either eliminates or constrains humanity’s growth.” I just think that setting up a company claiming to “tell the truth of reality” is already putting up another infused-with-bias edifice of error. [See footnote 1 below.]

I’m of the opinion that the AI cruise ship has sailed, we are all inside it as if on a big floating petri dish of humanity, and it will take all of us as we work through the ins and outs of AI to steer it. I agree with Mo Gawdat in the introduction of his book Scary Smart where he talks about the new superheroes for AI: “It is not the experts who have the capability to alleviate the threat facing humanity as a result of the emergence of superintelligence. No, it is you and I who have that power. More importantly, it is you and I who have that responsibility.”

We ALL have to step up to the plate and profess our truths about our universe and work WITH Musk and the other AI players, not be popcorn-popping wilters on the sidelines.


FOOTNOTE 1: According to ChatGPT-3.5 (I also used ChatGPT-4, but liked this 3.5 result better), the term “edifice of error” typically refers to a complex or elaborate structure built upon flawed or incorrect assumptions, beliefs, or principles. It suggests that the foundation of the structure is flawed, leading to subsequent errors or falsehoods built upon it.

ADDENDUM FOR JULY 29: I’ve since signed the Pause Giant AI Experiments: An Open Letter so I can be in the loop (although the letter came out a few months ago, in March of this year). I’m now trying to find evidence of the work being done by those claiming to sit on the pause button. Musk certainly is not pausing within the AI industry, yet his is the third signature on the open letter.

AI Leverage for All; Opt-Out for None

Steven Bartlett interviews Mo Gawdat about his thoughts on artificial intelligence and his book, ‘Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World.’

Game over. We are too late. Our way of life is never going to be the same again.

Mo Gawdat asks and answers his own questions in his conversation with Steven Bartlett: What went wrong? He explains that we gave too much power to people who didn’t assume the responsibility. “We have disconnected power with responsibility,” he claims. We should now have a great sense of URGENCY and EMERGENCY.

Earlier this month I listened to at least three different AI “experts” say that the best way to avoid being essentially run over by AI is to leverage it. And in the same vein of thinking, those who don’t leverage AI will be left in the technical and intellectual dust of those who do.

Gawdat’s solution? It’s smarter to create out of abundance. As we work with and on AI, we should choose to be ethical. He also said we might as well choose a job that makes the world a better place. Governments need to act now to regulate AI, before AI advances beyond what regulation can do. According to Gawdat, we can only regulate until AI gets smarter than we are.

Speaking of governments and not moving fast enough, Senate Majority Leader Chuck Schumer announced June 21 a “broad, open-ended plan for regulating artificial intelligence” that will start with about nine panels of experts from “industry, academia and civil society” that will “identify and discuss the hardest questions that regulations on AI will have to answer, including how to protect workers, national security and copyright and to defend against ‘doomsday scenarios.’” After these discussions, the Senate will “turn to committee chairs and other vocal lawmakers on AI legislation to develop bills reflecting the panel discussions.”

That process sounds too slow for what is predicted by Gawdat… and by the others listed in Schumer’s doomsday scenarios.

The worst of us have negativity bias. The best of us need to take charge. “Creating an artificial intelligence that is going to make someone richer is not what we need.” Gawdat believes that if just 1% of us engaged to leverage AI, we would be able to raise AI in a helpful-to-humanity way and not let the world devolve into a doomsday dilemma from which we couldn’t recover. None of us can really just opt out.