Must We Musk to Understand the Universe?

Today Elon Musk announced the formation of xAI. “The goal of X.AI is to understand the true nature of the universe.” The company has set up a meet-and-greet with the new team during a Twitter Spaces chat on Friday, July 14.

I have been experimenting with ChatGPT and just upgraded to ChatGPT-4. I have been charmed by the speed with which answers are generated. Some results have been spot-on, while others have hit random, far-reaching bumps in the ether. My searches and requests, however, haven’t been in the realm where woke culture intercedes or pops up, so I haven’t yet experienced the “destruction” of civilization that Musk has been warning about with unregulated AI that includes woke intrusions alongside blatant hallucinations.

To counteract this culture Musk claims has invaded OpenAI (a company he co-founded), he and colleagues are launching a new company called xAI to rival—through truth-telling—OpenAI and other presumptuous AI companies. According to Reuters, Musk registered a firm named X.AI Corp, incorporated in Nevada, in March of this year and only made the public aware of the company today.

My questions then become “whose truth are we telling through AI” and “what truths will be told.” Back in 2019, Elon Musk urged as the “utility function” of AI technology, “… a close coupling between collective human intelligence and digital intelligence” to “maximize the freedom of action of humanity” (Elon Musk on Artificial Intelligence 2019). I really liked that idea when he said it back then, and I still do now.

Perhaps, however, my idea of freedom of action is different from what Musk intended by his observation. I do believe, as Musk does, that “There is a risk that advanced AI either eliminates or constrains humanity’s growth.” I just think that setting up a company claiming to “tell the truth of reality” is already putting up another infused-with-bias edifice of error. [See footnote 1 below.]

I’m of the opinion that the AI cruise ship has sailed, we are all inside it as if on a big floating petri dish of humanity, and it will take all of us as we work through the ins and outs of AI to steer it. I agree with Mo Gawdat in the introduction of his book Scary Smart where he talks about the new superheroes for AI: “It is not the experts who have the capability to alleviate the threat facing humanity as a result of the emergence of superintelligence. No, it is you and I who have that power. More importantly, it is you and I who have that responsibility.”

We ALL have to step up to the plate and profess our truths about our universe and work WITH Musk and the other AI players, not be popcorn-popping wilters on the sidelines.


FOOTNOTE 1: According to ChatGPT-3.5 (I also used ChatGPT-4 but liked this 3.5 result better), the term “edifice of error” typically refers to a complex or elaborate structure built upon flawed or incorrect assumptions, beliefs, or principles. It suggests that the foundation or basis of the structure is flawed, leading to subsequent errors or falsehoods that are built upon it.

ADDENDUM FOR JULY 29: I’ve since signed the Pause Giant AI Experiments: An Open Letter so I can be in the loop (although this letter did come out a few months ago in March of this year). I’m now trying to find evidence of the work being done by those claiming to sit on the pause button. Musk certainly is not pausing within the AI industry, yet he is the third signature on the open letter.

AI Leverage for All; Opt-Out for None

Steven Bartlett interviews Mo Gawdat about his thoughts on artificial intelligence and his book, ‘Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World.’

Game over. We are too late. Our way of life is never going to be the same again.

Mo Gawdat asks and answers his own questions in his conversation with Steven Bartlett: What went wrong? He explains that we gave too much power to people who didn’t assume the responsibility. “We have disconnected power with responsibility,” he claims. We should now have a great sense of URGENCY and EMERGENCY.

Earlier this month I listened to at least three different AI “experts” say that the best way to not be essentially run over by AI is to leverage AI. And in the same vein of that thinking, those who don’t leverage AI will be left in the technical and intellectual dust of those who do.

Gawdat’s solution? It’s smarter to create out of abundance. When choosing as we work with and on AI, we should choose to be ethical. He also said we might as well choose a job that makes the world a better place. Government needs to act now to regulate AI, before AI advances beyond what regulation can do. According to Gawdat, we can only regulate until AI gets smarter than we are.

Speaking of governments and not moving fast enough, Senate Majority Leader Chuck Schumer announced June 21 a “broad, open-ended plan for regulating artificial intelligence” that will start with about nine panels of experts from “industry, academia and civil society” that will “identify and discuss the hardest questions that regulations on AI will have to answer, including how to protect workers, national security and copyright and to defend against ‘doomsday scenarios.’” After these discussions, the Senate will “turn to committee chairs and other vocal lawmakers on AI legislation to develop bills reflecting the panel discussions.”

That process sounds too slow for what is predicted by Gawdat… and by the others listed in Schumer’s doomsday scenarios.

The worst of us have negativity bias. The best of us need to take charge. “Creating an artificial intelligence that is going to make someone richer is not what we need.” Gawdat believes if just 1% of us engaged to leverage AI, we would be able to raise AI in a helpful-to-humanity way, and not let the world devolve into a doomsday dilemma from which we couldn’t recover. None of us really can just opt-out.

When AI “Gets” Teeth

Rushi Ganmukhi, founder and CEO of Bola AI, joins Garrett Clark, Managing Director of Silicon Slopes, in a conversation about the use of AI on June 16, 2023.


I went to Lehi for a presentation on the use of AI in the dental industry. The presentation was by Rushi Ganmukhi, founder and CEO of Bola AI, who developed AI technology to optimize the dental visit workflow through improved efficiency and to enhance the patient experience.

Rushi comes from a product engineering background and found a niche market in dentistry. He thinks AI technology will spin off into many other types of niche markets. Rushi commented that “data is the new oil” and that the “speed of product development is going to be crazy in the next couple of years.” He said entrepreneurs need to have a crystal clear focus on customers and have a strategic vision. To stay ahead of the AI curve, he said to let engineers play around with the new AI technology, work with partners on new technology, and invest back into the team.

Speaking of partners, Garrett and Rushi agreed that a Trojan Horse way into AI is through partnerships. I liked that comment and I need to figure out what my Trojan Horse is into AI. Garrett reiterated what was said the day before at the Silicon Slopes Summit—that those who will still have their jobs during this time through AI transitioning are those who can leverage AI.

I enjoyed seeing the conversation in person. If you choose to view the interview, make sure you listen for the dental puns Garrett drops along the way. These keep the conversation vibrant.

Silicon Slopes AI Summit Hallmarks Industry Professionals

Tyler Folkman, Chief Technology Officer and Chief AI Officer from BENlabs, Branded Entertainment Network, speaks at the Silicon Slopes AI Summit June 15, 2023.

Every day I look at the headlines about AI and read up on the local, national and international discussions surrounding all its aspects such as machine learning, deep learning, neural networks, automation, robotics and so forth. I’m also on the email list for our local chapter of Silicon Slopes. I noticed the other day in one of the emails that an upcoming event was to feature AI professionals. I jumped at the chance to register and attend the summit.

The inaugural Silicon Slopes AI Summit June 15 featured Wayne Vought, UVU Provost; Sean Reyes, Utah Attorney General; Dave Wright, CEO and Founder of Pattern; Dr. Rachel Bi, Associate Professor, UVU; Tyler Folkman, Chief Technology Officer and Chief AI Officer at BENlabs; and James Thorton, Chairman and CEO of Tafi and Daz 3D.

I took several pictures of speaker slides, but the one I feature at the top of this blog explains the given—AI starts with data. This fact alone is the impetus of AI and also an important set-up point for other aspects of AI such as regulation, policy, truth sets, bias, trust, privacy and more.

Here are the items I want to remember from the summit:

  • Need clean data
  • Generative AI is the game changer in entertainment
  • Use AI to better engage with audiences
  • Act like AI is to scale you
  • Leverage AI to succeed
  • Apply AI to what you love
  • Limitless growth by correctly using AI
  • Layer on AI technology in various ways
  • AI changes the rules of the game
  • The art of asking the right questions is the art of AI
  • AI can free us from mundane tasks
  • AI is at the forefront of innovation
  • Be able to deliver properly curated AI to the rising generation
  • UAE takes AI seriously!

There also were several cautionary tales about AI development and its harming potential. Reyes said that we need to “get arms around this tornado.” He posed the question: “What do we do when other countries use AI unethically?” He said that AI technology has been and will be used to disable and disrupt because the technology can help recruit and indoctrinate. He invited us all to be actively engaged with policy makers as AI technology moves forward.

Kicking the Educational Tires of ChatGPT

I love to read magazines. When I want to find out the latest chit-chat on a subject of interest to me, I go browse and buy. I usually purchase when I’m at an airport or at Barnes and Noble. I’ve recently taken to exploring the latest on AI in education, yet the online headlines keep moving the AI end zone with ever-increasing morsels of contradictory predictions.

I happened to be at Barnes and Noble earlier this month and had a wild hair to purchase all of the magazines that featured AI. I wanted to find something positive about AI in classroom situations where fearmongers wouldn’t have the upper hand on the tone of the articles. The May/June 2023 MIT Technology Review was among my pulled collection. The cover promised teachers who believed that AI could improve education.

Enter Will Douglas Heaven, senior editor for AI at MIT Technology Review and author of “The Education of ChatGPT.” Heaven quotes many teachers and educators who have found ways to utilize ChatGPT for student learning and even suggests that “banning them [chatbots] is futile, possibly even counterproductive.”

Heaven follows up this statement with a quote from Richard Culatta, CEO of the International Society for Technology in Education (ISTE) and a former colleague of mine from the early 2000s. “We need to be asking what we need to do to prepare young people—learners—for a future world that’s not far into the future.”

Bingo! Culatta hit the educational AI nail on the head for me. With all the doom and gloom swinging around about AI, there seems to be more sour soothsayers talking about the end of education than prescient prophets like Culatta who sense a beginning of “new” education that must innovatively prepare learners with knowledge and skill sets to navigate within the digital acceleration.

Granted, there is much to be done to figure out this future, but I’m with Heaven on this one about the positives of AI and with Culatta on prepping learners for the inevitable. Heaven ends with another quote from Culatta: “We have to withhold some of our quick judgment. That’s not helpful right now. We need to get comfortable kicking the tires on this thing.”

Excuse me for a moment. I’m pausing to go kick some tires.

The Religiosity of Artificial Intelligence and Climate Change

Man opening a stylized, futuristic Pandora's box to show AI and climate change that have been unleashed on the world.

I can’t stop thinking about an article I read a few days ago in the New York Times. Opinion columnist Thomas L. Friedman observed that we as a species are “lifting the lids on two giant Pandora’s boxes at the same time, without any idea of what could come flying out.”

Friedman labels these boxes as artificial intelligence (AI) and climate change (CC) and puts these entities into the semblance of religious frameworks. Friedman invokes godliness in saying that we are surpassing our own evolution and “manufacturing” AI in a “godlike” way. With CC we are “driving ourselves in a godlike way from one climate epoch into another” surpassing the “natural forces” that usually initiate such power. Friedman makes these observations in light of the question: “What kind of regulations and ethics must we put in place to manage what comes screaming out?”

Friedman cautions that we just can’t move recklessly forward with AI like we did on social media. Our relentless, fast-moving march with social networks broke the societal pillars of truth and trust. These media platforms, though well-intentioned for human connection and democratic dialogue when first created, have unleashed unimagined consequences in the lives of billions of people [Friedman quotes Dov Seidman here].

Seidman follows up by stating emphatically that there is “an urgent imperative — both ethical and regulatory — that these artificial intelligence technologies should only be used to complement and elevate what makes us uniquely human: our creativity, our curiosity and, at our best, our capacity for hope, ethics, empathy, grit and collaborating with others.”

As the article moves along, Friedman continues to support this human-elevating narrative by quoting other AI gurus who are looking at AI and coming to the same conclusions: We have to approach AI in a way that is responsible. And even more than this, we are going to have to “decide on some very big trade-offs as we introduce generative A.I.”

And that’s where we get to why I can’t stop thinking about this article:


And government regulation alone will not save us. I have a simple rule: The faster the pace of change and the more godlike powers we humans develop, the more everything old and slow matters more than ever — the more everything you learned in Sunday school, or from wherever you draw ethical inspiration, matters more than ever.

Thomas L. Friedman

Friedman then moves to the topic of CC and says “ditto” to the quote above as what we should apply for CC. He quotes scientists who observe that we are in a “man-made epoch, the Anthropocene” that “will have none of the predictable seasons of the Holocene.” Friedman then asks us to emphatically consider that AI could give us a “do-over” in CC and act as a “savior…that enable humans to manage the impacts of climate change that are now unavoidable and to avoid those that would be unmanageable.”

He concludes by stating that if “AI gives us a way to cushion the worst effects of climate change…we had better do it over right.”

Friedman’s parting shot motivated me to keep pushing ahead in my studies with AI:

God save us if we acquire godlike powers to part the Red Sea but fail to scale the Ten Commandments.

Blueprint for an AI Bill of Rights to Fill Possible Policy Gaps

Screenshot from the fact sheet of the Office of Science and Technology Policy.

Last October, the Biden-Harris administration released a Blueprint for an AI Bill of Rights in anticipation of the need to protect the civil rights of Americans against big technology intrusions. Although there are already many protections in place, the Blueprint was developed to “inform policy-making to fill those gaps” where there aren’t any clear or designated policies. It provides five basic protections:


  • Safe and Effective Systems: You should be protected from unsafe or ineffective systems.
  • Algorithmic Discrimination Protections: You should not face discrimination by algorithms and systems should be used and designed in an equitable way.
  • Data Privacy: You should be protected from abusive data practices via built-in protections and you should have agency over how data about you is used.
  • Notice and Explanation: You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you.
  • Alternative Options: You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.

At that same time, the White House also announced actions to be taken within the government in support of the Blueprint that would immediately assist citizens across vocations in education, health care, technology and so forth. This is what the White House has in store in education for 2023:


Protecting students and supporting educators:

  • To guide schools in the use of AI, the Department of Education will release recommendations on the use of AI for teaching and learning by early 2023. These recommendations will: give educators, parents and caregivers, students, and communities tools to leverage AI to advance universal design for learning; define specifications for the safety, fairness, and efficacy of AI models used within education; and introduce guidelines and guardrails that build on existing education data privacy regulations as well as introduce new policies to support schools in protecting students when using AI.

I look forward to reading what the Department of Education will release this year.

Sam Altman’s AI conversation and ChatGPT’s answer to my question

I’ve started doing some research on Artificial Intelligence. I’m mainly interested in its application to support learning for the millions of potential “students” who find themselves outside of a formal educational system either because of lack of opportunity or poverty. The idea would be that personalized learning scaffolding could be built through AI to fit the educational trajectory that a student needs to take. My desire to start down that AI road began before OpenAI’s ChatGPT came to the forefront.

As OpenAI came into my purview, I listened to an interview of Sam Altman, the CEO of OpenAI, to see if I could pick up any clues as to what might be coming down the AI pipe. Altman said that AI is the biggest technological platform of our generation—a legitimate revolution. He said that with AI, the marginal costs of intelligence and energy are trending towards zero. AI will impact ALL computing and will quicken iteration cycles. He said we will see an explosion of new companies that will use the large language model as a service.

The main societal issue at the beginning of the AI revolution will be disruption of economic activity. He suggested that we will need to have a new social contract on how we think about fairly distributing wealth, who will get access to AGI systems, and how governance will work. I think the AI disruption will also affect how we think about education, how it is distributed, and even what the nature of education is. What will it be that we need to learn? Who will determine what we need to learn?

In the spirit of OpenAI, I decided to ask ChatGPT “What are the best ways in which AI can support learning?” I was surprised at its ability to immediately inform me of five different ways articulated in succinct terms. Here is what it gave back.

AI can support learning in several ways, including:

  1. Personalized Learning: AI can personalize learning experiences for individual students, using algorithms to adapt to their needs and progress. This can help students learn at their own pace, with content and activities that are tailored to their specific learning style and level.
  2. Intelligent Tutoring Systems: Intelligent tutoring systems can provide real-time feedback to students, helping them understand and learn from mistakes, and improve their skills. These systems can also track student progress and suggest activities or resources to help them improve.
  3. Automated Grading: AI-powered tools can help automate grading and provide more consistent and objective evaluations of student work. This can save teachers time and provide students with faster feedback on their progress.
  4. Adaptive Testing: Adaptive testing systems can use AI algorithms to dynamically adjust the difficulty level of questions based on the student’s performance. This can help identify knowledge gaps and provide more accurate assessments of student learning.
  5. Content Creation: AI can also be used to create educational content, such as videos or interactive simulations, that is tailored to specific learning objectives and student needs.

By incorporating AI into the learning process, students can benefit from more personalized and adaptive learning experiences, faster and more consistent feedback, and access to high-quality educational resources.
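The adaptive-testing idea in item 4 can be pictured with a toy difficulty-adjustment loop. This is purely an illustrative sketch of the concept (the function names, step size, and scale here are my own invention, not from any real testing product); real adaptive tests use statistical models such as item response theory rather than a fixed step:

```python
# Toy sketch of adaptive testing (item 4 above): after each answer,
# the next question's difficulty moves toward the student's ability level.
# All names and numbers here are illustrative, not from a real library.

def next_difficulty(current, correct, step=0.5):
    """Raise difficulty after a correct answer, lower it after a miss."""
    return current + step if correct else max(0.0, current - step)

def run_quiz(answers, start=5.0):
    """Walk through a sequence of right/wrong answers, tracking difficulty."""
    difficulty = start
    history = [difficulty]
    for correct in answers:
        difficulty = next_difficulty(difficulty, correct)
        history.append(difficulty)
    return history

# A student who answers right, right, wrong settles slightly above the start.
print(run_quiz([True, True, False]))  # [5.0, 5.5, 6.0, 5.5]
```

The point of the sketch is simply that the system converges on the level where a student starts missing questions, which is also where the knowledge gaps mentioned in item 4 become visible.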

“You’re still glowin’, you’re still crowin’, you’re still goin’ strong!”— What Resulted From Matt’s Devotion to Open Source

Hi Matt: “You’re still glowin’, you’re still crowin’, you’re still goin’ strong!” (Quote from Hello, Dolly song by Jerry Herman.) Photo credit: Matt Mullenweg in 2019 by Christopher Michel. This file is licensed under the Creative Commons Attribution-Share Alike 4.0 International license.

I didn’t know who Matt was when I clicked on a link embedded to his first and last name. The link was included in a free plugin called Hello, Dolly in an also free WordPress template I had downloaded. I thought the premise of the plugin was delightful: “This is not just a plugin, it symbolizes the hope and enthusiasm of an entire generation summed up in two words sung most famously by Louis Armstrong: Hello, Dolly. When activated you will randomly see a lyric from Hello, Dolly in the upper right of your admin screen on every page.”

I wondered who would make such a fun app and just give it away, so I clicked on the name Matt Mullenweg. I was taken to Matt’s blog and read the most recent blog post that featured a podcast where Matt was interviewed about how he built…(ta da) WordPress!

Before I saw that WordPress was under his umbrella, this line from the synopsis of the podcast grabbed my attention as I happened to be writing a blogpost about open source: “Matt Mullenweg turned his early passion for blogging into a flourishing business and an unshakeable idea: that users should be able to share and tweak the code that powers their websites, and that most of those tools should be free to use” (emphasis mine).

I immediately pushed PLAY on the podcast and was mesmerized by Matt’s discussion with Guy Raz about his journey through open source. I think this podcast is a MUST LISTEN for anyone, especially those interested in open source, open education, open scholarship, open anything!

Give it a listen. You’ll then know why—because of open source—Matt’s still glowin’, still crowin’, still goin’ strong! (Hint: His company Automattic has nearly 2000 employees and a valuation of $7 billion; and WordPress powers more than 40% of the websites on the internet.)

#openscholarship #opensource

A Little Barbra, Louis, Star Trek and Wiki with Ruby, and What They Say About Jubilee, Joy and Justice for a “Wonderful World”

I think this classic duet between Louis Armstrong and Barbra Streisand showcases top collaborative talent that sparks pure and transcendent joy.

I couldn’t help but think of one of my favorite movies Hello, Dolly when I read David Heinemeier Hansson’s blog post I won’t let you pay me for my open source. He details his thought journey of the open source platform as a place where intrinsic motivation and self-transcendence find their expression, and how his development of Ruby on Rails as open source has brought him to a place of purpose that isn’t fettered with what he describes as the “overly rational cost-benefit analysis…that’s impoverishing our lives.” He invokes the idea of the jubilee (where debts are freely forgiven) through a series of narratives and concludes:

I hereby declare a jubilee for all imagined debt or obligations you think you might owe me or owe the Rails community as a whole. Let no one call upon you to ever feel obligated to repay this vanquished debt. Contribute to the Rails community because it brings meaning to your life. Because writing Ruby sparks joy. Don’t participate if it doesn’t.

Hansson

This idea of using open source to “spark joy,” as opposed to what Hansson describes as “market-soaked lives,” is where Hello, Dolly comes in. Pure joy happened when I activated a free plugin called Hello, Dolly from Matt Mullenweg, one of the builders of WordPress (originally built as a non-profit blogging platform to allow anyone, anywhere, to build a blog; now 40% of the world’s websites are built on WordPress, and most of those tools are still free to use). The plugin is advertised as follows:

This is not just a plugin, it symbolizes the hope and enthusiasm of an entire generation summed up in two words sung most famously by Louis Armstrong: Hello, Dolly. When activated you will randomly see a lyric from Hello, Dolly in the upper right of your admin screen on every page.

Mullenweg

And speaking of Louis Armstrong, Hansson even has a line in the article that echoes the title (and sentiment) of What a Wonderful World, another song made famous by Louis Armstrong. Hansson exclaims: “When I look at the literally billions of dollars in business that’s been done on the basis of this thing I started, I don’t look at that with envy or an open mouth….I don’t think ‘I should have had some of that’… I think what a wonderful world!”

Behind all of the sparks of joy of course is hard work, a need for resources and a market-driven economy that make up the landscape of business. Hansson discusses these items in the framework of scarcity mentality. He talks about freeloaders and the fear that there is just not enough to go around. He also mentions the “Malthusian specter” embedded in our psyche that “warns” society it can only support a set number of humans before resources run out.

Hansson doesn’t try to argue for “virtue or vice” within his treatise. He simply is interested in how the “scarcity paradigm” plays with the “Gates” types and the “Stallman” types of entities as they counter this scarcity issue. Both of these types rely on “trade exchange” for settlement of “debt obligations.”

Hansson also points out that the tragedy of the commons is a “conceptual misappropriation for open-source software development.” Software has no marginal cost, which makes freeloading free, so there is no scarcity to be concerned about. I like how Hansson lets us know no harm has come to him even though his software Rails has been downloaded about 170 million times and more than a million applications have been built with it.

He said that we must accept the fact that there is NO tragedy of the commons with open-source software. As we accept that fact, we also need to reframe our other assumptions about open source: that it isn’t sustainable, that it isn’t an exchange of goods and services, that makers of the software are accruing debt as the software is being used, and that our obligation for use of the software scales with the size of our business.

This brings me to the question posed by Manu Saadia in Trekonomics: “What would the world look like if everybody had everything they wanted or needed?” Would that we had a Star Trek “Federation” society where all citizens have their basic needs met and could pursue their passions with no exchange of money. Saadia describes the motivation (instead of wealth accumulation) for Star Trek Federation characters. He said: “Justice seems to be their most profound concern and what they’re aiming for as good people.” This idea plays into Hansson’s argument for open source that doesn’t produce the inequality spoken of in the ideas of several philosophers he quotes as he waxes into his treatise.

Part of what Hansson attempts to express while philosophizing is this concept of the responsibility of being true to oneself and the product being built. A section of that comes under the idea of sustainability. Hansson cautioned that we need to be careful of how that term is used because of its tie to the market. He said sustainability can be used as long as it is not defined by forced compliance.

He explains further his thoughts on open source, and his enumerations remind me of the evolution of Wikipedia. It is free and open for changes to the content. Yet it is so ubiquitous now that it has become self-regulating because of the user base that relies on its existence. I see these parameters in Ruby on Rails and other products Hansson has created. Because he has given the code out freely, the user base has built up passion for the continuous improvement and adaptation of the software. This comes about as the user community matures, there is a post-scarcity element to the continued production and use of the product, and altruistic motivations kick in as the normal means of exchange.

My take on what we learn from Hansson’s treatise: What a wonderful world this could be as the principles of joy, jubilee and justice continue to grow through open source based products.

#opensource #openscholarship