AI Report 2024

[Image: data code seen through eyeglasses. Photo by Kevin Ku on Pexels.com]

Summary

This document offers selected news about various aspects of artificial intelligence. For additional background see Artificial Intelligence Report 2023.

Misconceptions

Some of the amazing breakthroughs being credited to AI are actually capabilities we’ve had for decades. A video posted by NVIDIA in 2023, for example, presents service repair people being notified of an equipment failure as an AI advance. [Source] This illustrates how AI is being sold to investors, governments, taxpayers, and society on somewhat inaccurate grounds.

There are huge costs associated with AI, and thus huge profits to be made. As a result, people and businesses will be enticed to use AI even for applications where it is wasteful, excessive, or unnecessary.

For many such tasks, basic computing methods, or even simple electronics and switches, can deliver results more efficiently, at much lower cost, and with more reliable outcomes.

Sensors and systems for predictive failure existed long before AI was employed for such purposes. For example, S.M.A.R.T. (Self-Monitoring, Analysis and Reporting Technology) drive monitoring can detect conditions likely to result in hardware failure, such as mechanical behaviors indicative of an upcoming breakdown. This is done without the use of AI.
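The non-AI approach described above can be sketched as a plain threshold check. This is an illustrative example only, not tied to any specific vendor's tooling; the attribute names and threshold values are hypothetical.

```python
# A minimal sketch of S.M.A.R.T.-style predictive-failure logic: compare
# monitored attribute readings against fixed thresholds. No machine
# learning involved. Attribute names and limits here are hypothetical.

SMART_THRESHOLDS = {
    "reallocated_sector_count": 50,  # sectors remapped after read errors
    "spin_retry_count": 1,           # failed spin-up attempts
    "temperature_celsius": 60,       # sustained operating temperature
}

def predict_failure(readings: dict) -> list:
    """Return the attributes whose readings exceed their thresholds."""
    return [
        name for name, limit in SMART_THRESHOLDS.items()
        if readings.get(name, 0) > limit
    ]

# One risky attribute triggers a warning; absent attributes are ignored.
readings = {
    "reallocated_sector_count": 120,
    "spin_retry_count": 0,
    "temperature_celsius": 41,
}
print(predict_failure(readings))  # ['reallocated_sector_count']
```

Simple rules like this are cheap to run, easy to audit, and deterministic, which is exactly the reliability argument made above.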

For these reasons, it is important to stay realistic and accurate regarding the strengths of AI so it can be used in the most practical ways possible.

Concerns

Issues of primary concern are: high energy usage, elimination of jobs, dehumanization of services, use in warfare, and overall impact on society.

On 6 May 2024, Warren Buffett speaking about AI stated: “It has enormous potential for good and enormous potential for harm… We let a genie out of the bottle when we developed nuclear weapons… AI is somewhat similar…” [Source: CNN, 6 May 2024]

According to the International Monetary Fund, “nearly 40% of global employment could be disrupted by AI.” [Source: CNN, 6 May 2024]

Areas of concern are not necessarily reasons to resist AI. They are focal points for study where one should consider impact more deeply and pursue solutions to avoid negative outcomes.

Resource Links

Here are links to further reading. Entries here are in alphabetical order by source.

  • Center for AI Safety — “An Overview of Catastrophic AI Risks,” Artificial intelligence (AI) has recently seen rapid advancements, raising concerns among experts, policymakers, and world leaders about its potential risks. As with all powerful technologies, advanced AI must be handled with great responsibility to manage the risks and harness its potential. [View]
  • Department of Homeland Security, Science and Technology — “Risks and Mitigation Strategies for Adversarial Artificial Intelligence,” As the artificial intelligence (AI) landscape evolves, the pursuit of smarter algorithms has given rise to an AI-based sub-discipline that blurs the lines between innovation and deception: adversarial AI (AAI), where AI systems not only make predictions and take actions, but can also engage in a strategic dance of deception and counter-deception. This deception can target both humans as well as AI-based systems, thus breaking the reliability of the AI systems themselves and shaking the faith we have in the digital content we consume. 22 Dec 2023 [View]
  • Forbes — “The 15 Biggest Risks Of Artificial Intelligence,” AI does present some significant dangers — from job displacement to security and privacy concerns — and encouraging awareness of issues helps us engage in conversations about AI’s legal, ethical, and societal implications. [View]
  • Harvard Business Review — “4 Types of Gen AI Risk and How to Mitigate Them,” Technology giants, such as Microsoft and Salesforce, have already integrated generative AI into many of their products and recently committed to offering organizations the option of creating customized versions of their Large Language Models (LLMs). 31 May 2024 [View]
  • IBM — “10 AI dangers and risks and how to manage them,” Artificial intelligence (AI) has enormous value but capturing the full benefits of AI means facing and handling its potential pitfalls. The same sophisticated systems used to discover novel drugs, screen diseases, tackle climate change, conserve wildlife and protect biodiversity can also yield biased algorithms that cause harm and technologies that threaten security, privacy and even human existence. Here’s a closer look at 10 dangers of AI and actionable risk management strategies. Many of the AI risks listed here can be mitigated, but AI experts, developers, enterprises and governments must still grapple with them. 3 Sep 2024 [View]
  • MIT — “What are the risks from Artificial Intelligence?,” A comprehensive living database of over 700 AI risks categorized by their cause and risk domain. [View]
  • National Institute of Standards and Technology — “3 AI Risks and Trustworthiness,” For AI systems to be trustworthy, they often need to be responsive to a multiplicity of criteria that are of value to interested parties. Approaches which enhance AI trustworthiness can reduce negative AI risks. [View]
  • National Telecommunications and Information Administration — “Recognize Potential Harms and Risks,” Many commenters, especially individual commenters, expressed serious concerns about the impact of AI. AI system potential harms and risks have been well-documented elsewhere. 27 Mar 2024 [View]
  • Scientific American — “AI Causes Real Harm. Let’s Focus on That over the End-of-Humanity Hype,” Effective regulation of AI needs grounded science that investigates real harms, not glorified press releases about existential risks. 12 Aug 2023 [View]
  • Stanford — “At what point do we decide AI’s risks outweigh its promise?,” SIEPR Senior Fellow Chad Jones takes a closer look at the two sides of advanced artificial intelligence: unprecedented growth and existential risk. 31 May 2024 [View]
  • University of Oxford — “World leaders still need to wake up to AI risks, say leading experts ahead of AI Safety Summit,” Leading AI scientists, including researchers from the University of Oxford, are calling for stronger action on AI risks from world leaders, warning that progress has been insufficient since the first AI Safety Summit in Bletchley Park six months ago. 21 May 2024 [View]

News

Below are videos from selected news coverage about AI. Some of these videos praise the hopeful possibilities of AI, and others warn about its dangers. It’s important to have a broad understanding of these perspectives to consider the overall impact of AI.

Bloomberg (25 Oct 2024)

“Biden Preps AI Warfare and Spying Plan” — The Biden administration said the US must accelerate adoption of artificial intelligence for military and intelligence uses, directing agencies to quickly deploy the most powerful systems in a safe manner, according to a new national security memorandum. Bloomberg Government reporter Courtney Rozen joins Caroline Hyde and Ed Ludlow to discuss on “Bloomberg Technology.” [Source]

Forbes (25 Oct 2024)

“Mozilla’s President Urges Consumers To Learn The Risks Of Different AI Systems” — Mark Surman, president of the Mozilla Foundation, joins Forbes Talks to discuss the importance of open-source AI. Surman explains why consumers should care about the AI they’re using, highlighting the risks of closed-source AI systems. Surman delves into privacy implications and shares insights into companies prioritizing user privacy in their AI practices. [Source]

BBC (10 Oct 2024)

“Are we sleepwalking into an AI nightmare?” — Artificial intelligence (AI) technology is developing at high speed, with big players like Google and ChatGPT-maker OpenAI transforming modern life. However, some experts fear AI could be used for malicious purposes. On this week’s AI Decoded, tech writer Parmy Olson and psychologist Gary Marcus discuss the implications of consolidated power in AI development. [Source]

BBC (6 Oct 2024)

“The AI already in your phone” — We’re slowly getting used to Artificial Intelligence doing uncannily human things – chatting with us, creating pictures and videos. But so far, all of this AI has used a lot of computing power. And in the last year or so, we’ve seen a new type of computer chip made specifically for AI, and your mobile phone. Spencer Kelly has been testing some of the latest AI features available to us. [Source]

CNBC (28 Jul 2024)

“How The Massive Power Draw Of Generative AI Is Overtaxing Our Grid” — There are more than 8,000 data centers globally, but that’s not nearly enough to keep up with the power needs of generative AI. One ChatGPT query takes about 10 times as much energy as a typical Google search. Training one large language model can produce as much CO2 as the entire lifetime of five gas-powered cars and use as much water as a small country. Even if we generate enough power, our aging grid is increasingly unable to handle transmitting it to where it’s needed. That’s why data center companies like Vantage are building closer to where power is generated, while the industry invests in alternate energy sources and creative ways to harden the grid. [Source]

WSF (26 Jul 2024)

“Can AI Read Your Mind?” — By identifying patterns in neural firings, non-invasive AI systems are learning to decode human thought and translate the result into language. Leading researchers Michael Blumenstein and Jerry Tang join Brian Greene to describe the latest achievements of such “mind reading technologies” and where this research may shortly take us. [Source]

Ray Kurzweil (27 Jun 2024)

“The Last 6 Decades of AI — and What Comes Next” — How will AI improve our lives in the years to come? From its inception six decades ago to its recent exponential growth, futurist Ray Kurzweil highlights AI’s transformative impact on various fields and explains his prediction for the singularity: the point at which human intelligence merges with machine intelligence. [Source]

On Point (20 May 2024)

Labor economist David Autor’s research shows how, historically, technological advances hurt the incomes of middle- and working-class Americans. But when it comes to AI, Autor says the exact opposite could happen. [Listen]

Wall Street Journal (20 May 2024)

“Microsoft vs. Apple: Satya Nadella Says AI-Focused Copilot+ PCs Beat Macs” — Microsoft’s new Copilot+ PCs with Qualcomm chips and AI Windows features aim to beat Apple’s MacBooks. WSJ’s Joanna Stern tried out the new laptops and sat down with Microsoft CEO Satya Nadella to ask all about it. [Source]

CBS (16 May 2024)

“CEO Sal Khan on why he thinks AI can become every student’s personal tutor” — Khan Academy Founder and CEO Sal Khan joins “CBS Mornings” to discuss his new book, “Brave New Words: How AI Will Revolutionize Education (And Why That’s a Good Thing).” [Source]

CBS (8 May 2024)

“More workers are adopting AI, Microsoft finds” — A new Microsoft report shows that 75% of workers now use artificial intelligence at their jobs. Jared Spataro, Microsoft’s corporate vice president of AI at work, joins CBS News to discuss how the technology is changing workplaces. [Source]

Bloomberg (8 May 2024)

“Google CEO Sundar Pichai and the Future of AI” — Google & Alphabet CEO Sundar Pichai sits down exclusively with Bloomberg Originals Host & Executive Producer Emily Chang to discuss the future of search, rebuilding Google’s AI model Gemini from the ground up, competing with Microsoft and OpenAI, Google’s culture challenges, and how his upbringing prepared him for this moment. [Source]

Forbes (8 May 2024)

“How AI Will Shape Society Over The Next 20 Years” — This conversation between AI industry experts looks at how AI will affect what children learn in school, how we think about information and what the world will look like 20 years down the road. This conversation took place at Imagination In Action’s ‘Forging the Future of Business with AI’ Summit in April 2024. [Source]

Helen Toner (1 May 2024)

“How to Govern AI — Even If It’s Hard to Predict” — No one truly understands AI, not even experts, says Helen Toner, an AI policy researcher and former board member of OpenAI. But that doesn’t mean we can’t govern it. She shows how we can make smart policies to regulate this technology even as we struggle to predict where it’s headed — and why the right actions, right now, can shape the future we want. [Source]

DW News (29 Apr 2024)

“How AI causes serious environmental problems (but might also provide solutions)” — Companies and the academic community are rushing to take advantage of big, recent advances in artificial intelligence. In many cases, they’re using AI to minimize humanity’s impact on the environment and find solutions for the climate crisis. But AI consumes an enormous amount of natural resources, like water and electricity, in order to function. So, is AI a climate solution or problem? [Source]

Mark Zuckerberg (21 Apr 2024)

“Energy, not compute, will be the #1 bottleneck to AI progress” — [Source]

Graham Isador (29 Mar 2024)

“AI is listening to the forests. It could be a lifeline for species at risk.” — Excerpt: “With Arbimon, you can upload audio that you’ve recorded from the field, and an AI model can detect the presence of the species in that data. It does that very, very, quickly. If you compare that to the typical field of bioacoustics without AI, a biologist would spend 10 to 15 minutes analyzing a single 60-second audio file to detect a species. They’d have to listen to it multiple times. Imagine someone has collected 10,000 recordings. With AI machine learning and the help of Arbimon, you can do that analysis in seconds. After identifying species, Arbimon can overlay this information on top of maps, telling you where species are located. It starts giving you a very good indication of what the health and biodiversity in that area is. Right now, it has over 155 million recordings uploaded from 120 different countries. It’s by far the largest platform in the world for biodiversity monitoring using sound.” [Source: Dropbox]
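The time savings described in the excerpt can be checked with simple arithmetic, using only the figures quoted there: 10 to 15 minutes of manual review per 60-second recording, across 10,000 recordings.

```python
# Worked example using only the numbers quoted in the Arbimon excerpt:
# manual review takes 10-15 minutes per 60-second file, and a project
# may hold 10,000 recordings.

RECORDINGS = 10_000
MINUTES_PER_FILE_LOW, MINUTES_PER_FILE_HIGH = 10, 15

low_hours = RECORDINGS * MINUTES_PER_FILE_LOW / 60
high_hours = RECORDINGS * MINUTES_PER_FILE_HIGH / 60

print(f"Manual analysis: {low_hours:.0f}-{high_hours:.0f} hours")
# Manual analysis: 1667-2500 hours
```

That is on the order of a year of full-time listening for a single dataset, which is why reducing the per-file analysis to seconds changes what field biologists can attempt.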

NBC (6 Mar 2024)

“AI industry aims to revive nuclear power to fuel itself” — A Silicon Valley-backed company wants to build a 15 megawatt reactor based off an experiment done by the Idaho National Lab decades ago that could power the future of AI. NBC News’ Jacob Ward explains how the reactor will work and who could benefit from the project. [Source]

Background

Cathie Wood (18 Dec 2023)

“Why AI Will Spark Exponential Economic Growth” — Investor Cathie Wood explores this unique moment in technology, which she sees as being marked by the simultaneous evolution of five pivotal innovation platforms — a scenario unparalleled in history. Exploring the role of AI in reshaping economic paradigms, she predicts a surge in global GDP growth and productivity, underscoring the need for businesses and investors to adapt in order to keep up. [Source]

Sasha Luccioni (6 Nov 2023)

“AI Is Dangerous, but Not for the Reasons You Think” — AI won’t kill us all — but that doesn’t make it trustworthy. Instead of getting distracted by future existential risks, AI ethics researcher Sasha Luccioni thinks we need to focus on the technology’s current negative impacts, like emitting carbon, infringing copyrights and spreading biased information. She offers practical solutions to regulate our AI-filled future — so it’s inclusive and transparent. [Source]

NVIDIA (24 Jul 2023)

“Powering the Future of Clean Energy” — See how #AI is helping accelerate the clean energy transition by optimizing solar and wind farms, simulating climate and weather, modernizing power grids, capturing carbon, and advancing fusion breakthroughs. [Source]

CBC (13 Jun 2023)

“AI’s hidden climate costs” — As the demand for artificial intelligence grows, so does its thirst for one of our planet’s most precious resources. Andrew Chang explores how the supercomputers that run AI stay cool — by using massive amounts of fresh, clean drinking water. [Source]

By Greg Johnson

Greg Johnson is a freelance writer and tech consultant in Iowa City. He is also the founder and Director of the ResourcesForLife.com website. Learn more at AboutGregJohnson.com