
🤖 What does Stanford think of AI?

PLUS: President Biden’s AI meeting

Happy Thursday!

In Today’s Email:

  • What does Stanford think of AI? 🏫

  • President Biden’s AI meeting 🇺🇸

  • Berkeley releases Koala-13B 🐨

What does Stanford think of AI? 🏫

Stanford just released a 368-page report on the state of AI.

Don’t worry, we read it so you don’t have to.

Here are some of the key takeaways:

  • Industry > Academia: “Until 2014, most significant machine learning models were released by academia. Since then, industry has taken over. In 2022, there were 32 significant industry-produced machine learning models compared to just three produced by academia. Building state-of-the-art AI systems increasingly requires large amounts of data, computer power, and money—resources that industry actors inherently possess in greater amounts compared to nonprofits and academia.”

  • AI is helping and harming the environment: “New research suggests that AI systems can have serious environmental impacts. According to Luccioni et al., 2022, BLOOM’s training run emitted 25 times more carbon than a single air traveler on a one-way trip from New York to San Francisco. Still, new reinforcement learning models like BCOOLER show that AI systems can be used to optimize energy usage.”

  • AI is outperforming scientists: “AI models are starting to rapidly accelerate scientific progress and in 2022 were used to aid hydrogen fusion, improve the efficiency of matrix manipulation, and generate new antibodies.”

The report was incredibly insightful; check it out here.

President Biden’s AI meeting 🇺🇸

The US recently chimed in on the state of AI.

Yesterday, US President Joe Biden met with the President’s Council of Advisors on Science and Technology (PCAST) to discuss the dangers of AI.

“Tech companies have a responsibility, in my view, to make sure their products are safe before making them public,” Biden said.

Biden told advisors AI could be used for good, such as addressing disease and climate change. When asked if AI is dangerous, he stated, “It remains to be seen. It could be.”

AI is starting to become a talking point for many US policymakers.

  • US Senator Chris Murphy has warned against the development of AI, calling on society to pause and consider its ramifications

  • Last year, the Biden administration released a “Bill of Rights” for the development of AI systems; the blueprint is designed to protect users’ rights as technology companies continue to design and develop AI

  • The Center for Artificial Intelligence and Digital Policy has asked the US Federal Trade Commission to pause commercial releases of OpenAI’s GPT-4

What do you think of US legislation and the state of AI?

Berkeley releases Koala-13B 🐨

The Berkeley Artificial Intelligence Research (BAIR) lab recently introduced Koala-13B.

The chatbot is a fine-tuned version of Meta’s LLaMA model, trained on dialogue data gathered from the web. Rather than maximizing the quantity of web data, the team at Berkeley focused on collecting “a small high-quality dataset.”
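To make that approach concrete, here is a minimal sketch of this style of supervised fine-tuning using Hugging Face Transformers. The base model name, dataset file, and hyperparameters are illustrative assumptions, not Koala’s published recipe.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

BASE = "huggyllama/llama-13b"  # illustrative stand-in for the LLaMA base weights

tokenizer = AutoTokenizer.from_pretrained(BASE)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA ships without a pad token
model = AutoModelForCausalLM.from_pretrained(BASE)

# Hypothetical file: one JSON object per line, with a "text" field holding
# a full dialogue transcript from the curated dataset.
dialogues = load_dataset("json", data_files="dialogues.jsonl")["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=2048)

tokenized = dialogues.map(
    tokenize, batched=True, remove_columns=dialogues.column_names
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="koala-style-sft",
        num_train_epochs=2,
        per_device_train_batch_size=1,
        learning_rate=2e-5,
    ),
    train_dataset=tokenized,
    # Causal LM collator: labels are the input tokens shifted by one
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The key idea, per the Berkeley team, is that data quality matters more than scale here: the same loop over a small, curated dialogue set stands in for scraping the entire web.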

Experiments compared Koala with Stanford’s Alpaca on 180 test queries drawn from the Koala Test Set and the Alpaca Test Set. Two variants were evaluated: Koala-Distill, which uses only distillation data, and Koala-All, which uses all of the data, including both distillation and open-source data.

The team at Berkeley also touched on the limitations and safety of the model.

  • Biases and Stereotypes: “Our model will inherit biases from the dialogue data it was trained on, possibly perpetuating harmful stereotypes, discrimination, and other harms.”

  • Lack of Common Sense: “While large language models can generate text that appears to be coherent and grammatically correct, they often lack common sense knowledge that humans take for granted. This can lead to nonsensical or inappropriate responses.”

  • Limited Understanding: “Large language models can struggle to understand the context and nuances of a dialogue. They can also have difficulty identifying sarcasm or irony, which can lead to misunderstandings.”

Although Koala is a research prototype, it suggests that models small enough to be run locally can compete with large models if they’re trained on carefully sourced data. 🤯

Check out the interactive demo here.

That’s it for today, we’ll see you back here tomorrow!