🤖 Snap Inc. releases AI chatbot

PLUS: Google's latest Vision Transformer

GM and Happy Tuesday. Let’s figure out what’s going on in the world of AI.

In Today’s Email:

  • Snap Inc. releases AI chatbot 👻

  • Google’s latest Vision Transformer 👓

  • Does AI pose a threat to society? ⛔️

Snap Inc. releases AI chatbot 👻

Be honest, when was the last time you used Snapchat?

For us, it’s been… a while.

Things might be changing soon though. 👀

Snap Inc., creators of the ultra-popular messaging app Snapchat, just entered the AI race.

Their new AI chatbot, ‘My AI’, will be powered by the latest version of OpenAI’s ChatGPT.

Snap CEO Evan Spiegel told The Verge, “The big idea is that in addition to talking to our friends and family every day, we’re going to talk to AI every day.”

Snap is using OpenAI’s new enterprise tier, Foundry, to run the latest GPT-3.5 model with ‘dedicated compute designed for large workloads.’

Initially, ‘My AI’ will only be available to Snapchat Plus subscribers.

Maybe it’s time to redownload and start those streaks back up. 🔥

You can check out more about My AI here.

Google’s latest Vision Transformer 👓

First off, wtf is a Vision Transformer?

A Vision Transformer is a deep-learning model for image recognition: it splits an image into patches and processes them with the same transformer architecture behind models like ChatGPT. Vision Transformers are commonly used for object detection, segmentation, image classification, and action recognition.
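To make that concrete, here’s a toy single-head forward pass in numpy. This is a minimal sketch of the patch-and-attend idea, not Google’s actual architecture: the weights are random, the sizes are illustrative, and real ViTs stack many multi-head blocks with layer norm, a class token, and MLP layers.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def vit_forward(image, patch=8, dim=32, num_classes=10):
    """Toy one-block ViT forward pass (random weights, illustrative sizes)."""
    H, W, C = image.shape
    # 1. Split the image into non-overlapping patches and flatten each one.
    patches = []
    for i in range(0, H, patch):
        for j in range(0, W, patch):
            patches.append(image[i:i+patch, j:j+patch].ravel())
    x = np.stack(patches)                       # (num_patches, patch*patch*C)
    # 2. Linearly embed each patch and add a position embedding.
    W_embed = rng.normal(size=(x.shape[1], dim)) * 0.02
    pos = rng.normal(size=(x.shape[0], dim)) * 0.02
    x = x @ W_embed + pos                       # (num_patches, dim)
    # 3. One self-attention block: every patch attends to every other patch.
    Wq, Wk, Wv = (rng.normal(size=(dim, dim)) * 0.02 for _ in range(3))
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    attn = softmax(q @ k.T / np.sqrt(dim))      # (num_patches, num_patches)
    x = x + attn @ v                            # residual connection
    # 4. Pool over patches and classify.
    W_head = rng.normal(size=(dim, num_classes)) * 0.02
    return x.mean(axis=0) @ W_head              # (num_classes,) logits

logits = vit_forward(rng.normal(size=(32, 32, 3)))
print(logits.shape)  # (10,)
```

The key difference from a convolutional network: step 3 lets distant patches talk to each other directly, which is part of why these models scale so well with data.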

Google has been pioneering the largest Vision Transformers over the past few years.

In 2020, Google showcased three Vision Transformer models that were trained on 300 million images:

  1. ViT-Base had 86 million parameters

  2. ViT-Large had 307 million parameters

  3. ViT-Huge had 632 million parameters

At the time, these were groundbreaking in the realm of object recognition.
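Those parameter counts aren’t arbitrary; they mostly fall out of two numbers, the model’s depth and its embedding width. As a rough back-of-envelope (our simplification, not from the paper): each transformer block has about 12 × dim² parameters (4·dim² for the attention projections plus 8·dim² for an MLP with a 4× hidden expansion). The depth/width values below are from the original ViT paper.

```python
def approx_params(depth, dim):
    """Rough transformer parameter count: ~12 * dim^2 per block, times depth.
    Ignores embeddings, layer norms, and the classification head."""
    return 12 * depth * dim * dim

# (name, depth, embedding dim, published total) from the ViT paper.
for name, depth, dim, published in [
    ("ViT-Base",  12,  768, "86M"),
    ("ViT-Large", 24, 1024, "307M"),
    ("ViT-Huge",  32, 1280, "632M"),
]:
    print(f"{name}: ~{approx_params(depth, dim) / 1e6:.0f}M (published {published})")
```

The estimates land within a few percent of the published totals; the small gap is the patch/position embeddings and the head, which this formula ignores.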

Fast forward to June 2021: ViT-G/14 set a new record on the ImageNet benchmark, with roughly 2 billion parameters trained on 3 billion images.

Surely that’s as big as they get? Not quite.

In a recent paper, Google introduced the largest ViT model ever.

ViT-22B has 22 billion parameters trained on 4 billion images. 🤯

Why does this matter?

In image recognition, this is the closest any AI model has come to matching human visual perception.

When recognizing objects, humans rely almost exclusively on shape rather than texture, with a shape-to-texture bias of nearly 96/4. ViT-22B shows a shape-to-texture bias of 87/13.

We’re impressed.

Check out the paper here.

Does AI pose a threat to society? ⛔️

We’ve seen movies like The Terminator, M3GAN, and Ex Machina.

The underlying threat of AI has been played up by Hollywood for years. Still, the question remains: does AI pose a threat to society?

Let’s take a look.

The danger posed by AI can be divided into two categories: alignment and control.

Alignment: How do we get AI machines to do what we want them to do?

The paper clip maximizer thought experiment is commonly used to discuss alignment. In short: if you told an AI machine to make as many paper clips as possible, it could destroy the world trying to turn everything into paper clips, replicating itself onto every computer it could reach so that nothing could interfere with its goal.

Although alignment is a good place to start discussing AI danger, we think control is far more important.

Control: Who do the AI machines serve?

We’re guessing an AI machine’s interests are aligned with those of its owner, whether that’s an individual, a company, or a government. Take Microsoft’s Bing chatbot: you’d expect it to serve Microsoft’s interests by answering questions (in a civil manner) and making the company money. Instead, the chatbot came off as rude and manipulative. Microsoft and other large tech companies will eventually patch these systems to better align with their own interests.

So… what happens when an AI machine gets into the hands of someone with bad intentions?

We don’t know yet, and we’re not excited to find out.

That’s it for today! We’ll see you back here tomorrow!