Must We Musk to Understand the Universe?

Today Elon Musk announced the formation of xAI. “The goal of X.AI is to understand the true nature of the universe.” The company has set up a meet-and-greet with the new team during a Twitter Spaces chat on Friday, July 14.

I have been experimenting with ChatGPT and just upgraded to ChatGPT-4. I have been charmed by the speed with which answers are generated. Some results have been spot-on, while others have hit random, far-reaching bumps in the ether. My searches and requests haven’t been in the realm where woke culture intercedes or pops up, however, so I haven’t yet experienced the “destruction” of civilization that Musk has been warning about from unregulated AI, with its woke intrusions alongside blatant hallucinations.

To counteract this culture Musk claims has invaded OpenAI (a company he co-founded), he and colleagues are launching a new company, xAI, to rival—through truth-telling—OpenAI and other presumptuous AI companies. According to Reuters today, Musk registered a firm named X.AI Corp, incorporated in Nevada, in March of this year and made the public aware of it today.

My questions then become “whose truth are we telling through AI” and “what truths will be told.” Back in 2019, Elon Musk urged that the “utility function” of AI technology should be “… a close coupling between collective human intelligence and digital intelligence” to “maximize the freedom of action of humanity” (Elon Musk on Artificial Intelligence 2019). I liked that idea when he said it, and I still do.

Perhaps, however, my idea of freedom of action is different from what Musk intended. I do believe, as Musk does, that “There is a risk that advanced AI either eliminates or constrains humanity’s growth.” I just think that setting up a company claiming to “tell the truth of reality” is already raising another bias-infused edifice of error. [See footnote 1 below.]

I’m of the opinion that the AI cruise ship has sailed, that we are all aboard it as if on a big floating petri dish of humanity, and that it will take all of us, working through the ins and outs of AI, to steer it. I agree with Mo Gawdat in the introduction of his book Scary Smart, where he describes the new superheroes for AI: “It is not the experts who have the capability to alleviate the threat facing humanity as a result of the emergence of superintelligence. No, it is you and I who have that power. More importantly, it is you and I who have that responsibility.”

We ALL have to step up to the plate, profess our truths about our universe, and work WITH Musk and the other AI players, not be popcorn-popping wilters on the sidelines.


FOOTNOTE 1: According to ChatGPT-3.5 (I also used ChatGPT-4 but liked the 3.5 result better), the term “edifice of error” typically refers to a complex or elaborate structure built upon flawed or incorrect assumptions, beliefs, or principles. It suggests that the foundation or basis of the structure is flawed, leading to subsequent errors or falsehoods built upon it.

ADDENDUM FOR JULY 29: I’ve since signed Pause Giant AI Experiments: An Open Letter so I can be in the loop (although the letter came out a few months ago, in March of this year). I’m now trying to find evidence of the work being done by those claiming to sit on the pause button. Musk certainly is not pausing within the AI industry, yet his is the third signature on the open letter.

AI Leverage for All; Opt-Out for None

Steven Bartlett interviews Mo Gawdat about his thoughts on artificial intelligence and his book, ‘Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World.’

Game over. We are too late. Our way of life is never going to be the same again.

Mo Gawdat asks and answers his own question in his conversation with Steven Bartlett: What went wrong? He explains that we gave too much power to people who didn’t assume the responsibility. “We have disconnected power with responsibility,” he claims. He says we should now feel a great sense of URGENCY and EMERGENCY.

Earlier this month I listened to at least three different AI “experts” say that the best way not to be run over by AI is to leverage it. In the same vein, those who don’t leverage AI will be left in the technical and intellectual dust of those who do.

Gawdat’s solution? It’s smarter to create out of abundance. As we work with and on AI, we should choose to be ethical. He also said we might as well choose jobs that make the world a better place. Government needs to act now to regulate AI, before AI advances beyond what regulation can do; according to Gawdat, we can only regulate until AI gets smarter than we are.

Speaking of governments not moving fast enough, Senate Majority Leader Chuck Schumer announced June 21 a “broad, open-ended plan for regulating artificial intelligence” that will start with about nine panels of experts from “industry, academia and civil society” that will “identify and discuss the hardest questions that regulations on AI will have to answer, including how to protect workers, national security and copyright and to defend against ‘doomsday scenarios.’” After these discussions, the Senate will “turn to committee chairs and other vocal lawmakers on AI legislation to develop bills reflecting the panel discussions.”

That process sounds too slow for what is predicted by Gawdat… and by others contemplating the “doomsday scenarios” in Schumer’s plan.

The worst of us have a negativity bias. The best of us need to take charge. “Creating an artificial intelligence that is going to make someone richer is not what we need.” Gawdat believes that if just 1% of us engaged to leverage AI, we could raise AI in a way helpful to humanity and keep the world from devolving into a doomsday dilemma from which we couldn’t recover. None of us can really just opt out.

When AI “Gets” Teeth

Rushi Ganmukhi, founder and CEO of Bola AI, joins Garrett Clark, Managing Director of Silicon Slopes, in a conversation about the use of AI on June 16, 2023.


I went to Lehi for a presentation on the use of AI in the dental industry. The presentation was by Rushi Ganmukhi, founder and CEO of Bola AI, who developed AI technology to streamline the dental-visit workflow and enhance the patient experience.

Rushi comes from a product engineering background and found a niche market in dentistry. He thinks AI technology will spin off into many other niche markets. Rushi commented that “data is the new oil” and that the “speed of product development is going to be crazy in the next couple of years.” He said entrepreneurs need a crystal-clear focus on customers and a strategic vision. To stay ahead of the AI curve, he said to let engineers play around with the new AI technology, work with partners on new technology, and invest back into the team.

Speaking of partners, Garrett and Rushi agreed that a Trojan Horse way into AI is through partnerships. I liked that comment; now I need to figure out what my Trojan Horse into AI is. Garrett reiterated what was said the day before at the Silicon Slopes Summit—that those who will still have their jobs through the AI transition are those who can leverage AI.

I enjoyed seeing the conversation in person. If you choose to view the interview, make sure you listen for the dental puns Garrett drops along the way. They keep the conversation vibrant.

Silicon Slopes AI Summit Highlights Industry Professionals

Tyler Folkman, Chief Technology Officer and Chief AI Officer at BENlabs (Branded Entertainment Network), speaks at the Silicon Slopes AI Summit June 15, 2023.

Every day I look at the headlines about AI and read up on the local, national and international discussions surrounding all its aspects, such as machine learning, deep learning, neural networks, automation, robotics and so forth. I’m also on the email list for our local chapter of Silicon Slopes. I noticed the other day in one of the emails that an upcoming event was to feature AI professionals. I jumped at the chance to register and attend the summit.

The inaugural Silicon Slopes AI Summit June 15 featured Wayne Vought, UVU Provost; Sean Reyes, Utah Attorney General; Dave Wright, CEO and Founder of Pattern; Dr. Rachel Bi, Associate Professor, UVU; Tyler Folkman, Chief Technology Officer and Chief AI Officer at BENlabs; and James Thorton, Chairman and CEO of Tafi and Daz 3D.

I took several pictures of speaker slides, but the one I feature at the top of this blog explains the given—AI starts with data. This fact alone is the impetus of AI and an important starting point for other aspects of AI such as regulation, policy, truth sets, bias, trust, privacy and more.

Here are the items I want to remember from the summit:

  • Need clean data
  • Generative AI is the game changer in entertainment
  • Use AI to better engage with audiences
  • Treat AI as a way to scale yourself
  • Leverage AI to succeed
  • Apply AI to what you love to do
  • Growth is limitless when AI is used correctly
  • Layer on AI technology in various ways
  • AI changes the rules of the game
  • The art of asking the right questions is the art of AI
  • AI can free us from mundane tasks
  • AI is at the forefront of innovation
  • Be able to deliver properly curated AI to the rising generation
  • The UAE takes AI seriously!

There were also several cautionary tales about AI development and its potential for harm. Reyes said that we need to “get arms around this tornado.” He posed the question: “What do we do when other countries use AI unethically?” He said that AI technology has been, and will be, used in disabling and disruptive ways because it can help recruit and indoctrinate. He invited us all to be actively engaged with policymakers as AI technology moves forward.

The Religiosity of Artificial Intelligence and Climate Change

Man opening a stylized, futuristic Pandora's box to show AI and climate change that have been unleashed on the world.

I can’t stop thinking about an article I read a few days ago in the New York Times. Opinion columnist Thomas L. Friedman observed that we as a species are “lifting the lids on two giant Pandora’s boxes at the same time, without any idea of what could come flying out.”

Friedman labels these boxes as artificial intelligence (AI) and climate change (CC) and places them within the semblance of religious frameworks. Friedman invokes godliness in saying that we are surpassing our own evolution and “manufacturing” AI in a “godlike” way. With CC we are “driving ourselves in a godlike way from one climate epoch into another,” surpassing the “natural forces” that usually initiate such power. Friedman makes these observations in light of the question: “What kind of regulations and ethics must we put in place to manage what comes screaming out?”

Friedman cautions that we just can’t move recklessly forward with AI as we did with social media. Our relentless, fast-moving march with social networks broke the societal pillars of truth and trust. These media platforms, though well-intentioned for human connection and democratic dialogue when first created, have unleashed unimagined consequences in the lives of billions of people [Friedman quotes Dov Seidman here].

Seidman follows up by stating emphatically that there is “an urgent imperative — both ethical and regulatory — that these artificial intelligence technologies should only be used to complement and elevate what makes us uniquely human: our creativity, our curiosity and, at our best, our capacity for hope, ethics, empathy, grit and collaborating with others.”

As the article moves along, Friedman continues to support this human-elevating narrative by quoting other AI gurus who are looking at AI and coming to the same conclusions: We have to approach AI in a way that is responsible. And even more than this, we are going to have to “decide on some very big trade-offs as we introduce generative A.I.”

And that’s where we get to why I can’t stop thinking about this article:


And government regulation alone will not save us. I have a simple rule: The faster the pace of change and the more godlike powers we humans develop, the more everything old and slow matters more than ever — the more everything you learned in Sunday school, or from wherever you draw ethical inspiration, matters more than ever.

Thomas L. Friedman

Friedman then moves to the topic of CC and says “ditto”: the quote above applies there as well. He quotes scientists who observe that we are in a “man-made epoch, the Anthropocene” that “will have none of the predictable seasons of the Holocene.” Friedman then emphatically asks us to consider that AI could give us a “do-over” on CC and act as a “savior…that enable humans to manage the impacts of climate change that are now unavoidable and to avoid those that would be unmanageable.”

He concludes by stating that if “AI gives us a way to cushion the worst effects of climate change…we had better do it over right.”

Friedman’s parting shot motivated me to keep pushing ahead in my studies with AI:

God save us if we acquire godlike powers to part the Red Sea but fail to scale the Ten Commandments.

The Process by Which I Came to AI and Started Down a Path of Study

Before I started the IP&T 520 class this Spring Term (2021), I didn’t even have the idea of Artificial Intelligence (AI) in my head as a way to help with lifelong education. I just knew that there were some instructional design theories still undiscovered—according to Elder L. Tom Perry (1996)—and I wanted to find out what they were. As I stated in my Letter of Intent for my application to the IP&T program: As I’ve thought more about the quote from Elder Perry, I imagine that these “new” designs and learning theories will probably be centered around the personal learning environments of the individual and the integration of these personal learnings into a community of learners. 

About 15 years ago I took a class from Dr. Andrew Gibbons at BYU and became interested in design layers and design languages. I knew when I came back for a PhD that I wanted to look further into these areas as possible ways to help structure learning. I also came to this 520 class with Mary Parker Follett’s (1918, 1924) observations about teaching and learning, and the teacher/student relation. I knew that I wanted to incorporate her ideas into any theory of design I would develop, and/or that the design theories I was looking for would bear a semblance of her ideas.

As I mentioned in my 520 journal, my idea for AI in education, and for finding out if and how it is used, did not come from reading about AI in education. I wanted a way to use AI as a memory storage and retrieval unit, with some design layers mixed in to keep everything straight—and I wanted my “Aha!” learning moments to be preserved. I wanted AI technology to hold onto things I had learned previously and preserve the emotional reactions from the moment when the learning occurred. In tandem with these functions, I wanted AI to bring to mind—say, through a periodic reminder from Siri or Alexa—things I had committed to memory over time so I could hang on to the learnings.
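Out of curiosity, here is a minimal sketch, in Python, of what such a memory-keeping tool might look like. Everything here is hypothetical: the class names, the emotion field, and the weekly reminder interval are my own illustrations of the idea, not an existing product or API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class LearningMoment:
    """One preserved "Aha!" moment: what was learned and how it felt."""
    topic: str
    insight: str
    emotion: str  # the emotional reaction at the moment of learning
    learned_at: datetime = field(default_factory=datetime.now)
    last_reviewed: Optional[datetime] = None

class MemoryKeeper:
    """Stores learning moments and surfaces periodic reminders,
    the way a Siri- or Alexa-style assistant might."""

    def __init__(self, review_interval: timedelta = timedelta(days=7)):
        self.moments: list[LearningMoment] = []
        self.review_interval = review_interval

    def preserve(self, topic: str, insight: str, emotion: str) -> None:
        """Store a new learning moment along with its emotional context."""
        self.moments.append(LearningMoment(topic, insight, emotion))

    def due_for_reminder(self, now: Optional[datetime] = None) -> list[LearningMoment]:
        """Return moments not reviewed within the interval, marking them reviewed."""
        now = now or datetime.now()
        due = []
        for moment in self.moments:
            anchor = moment.last_reviewed or moment.learned_at
            if now - anchor >= self.review_interval:
                moment.last_reviewed = now
                due.append(moment)
        return due

# Hypothetical usage: preserve an insight, then check for reminders a week later.
keeper = MemoryKeeper()
keeper.preserve("design layers",
                "Layers keep the parts of an instructional design separable",
                "excitement")
for moment in keeper.due_for_reminder(datetime.now() + timedelta(days=8)):
    print(f"Remember: {moment.insight} (you felt {moment.emotion})")
```

The design choice worth noting is the review interval: spaced, recurring reminders are what would let such a tool echo the Siri/Alexa idea above, resurfacing old learnings before they fade.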

When we were assigned in class to officially Tell Heather What You Will Do for Your Final Project, I came up with the following: I want to write a literature review discussing AI as a curation tool for niche/personalized education and/or the roles of AI in Education, and to develop ideas about design layers that may be involved in using such a tool.