
🤖 Meta’s latest vision model

PLUS: OpenAI’s approach to safety

Happy Friday!

Here’s what’s on the menu:

  • Meta’s latest vision model 🎥

  • OpenAI’s approach to safety 🦺

  • What is Anthropic’s $5B plan? 📝

Meta’s latest vision model 🎥

Meta just dropped its latest AI model, the Segment Anything Model (SAM), and it’s pretty impressive.

The model is a promptable segmentation system that can “cut out” any object in any image.


SAM has the ability to:

  • Segment objects with just a click, or by interactively clicking points to include and exclude from the object (see the code sketch after this list)

  • Output multiple valid masks when faced with uncertainty about the object being selected

  • Automatically find and mask all objects in an image

  • Generate a segmentation mask for any prompt in real time
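
Meta also open-sourced the model and weights, so you can try the click-to-segment flow yourself. Below is a minimal sketch using the segment-anything package; the image path and click coordinates are placeholders for your own setup, and the ViT-H checkpoint is the one from Meta’s release page.

```python
# Minimal click-to-segment sketch with Meta's segment-anything package.
import cv2
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

# Load the released ViT-H checkpoint; sam.to("cuda") is optional but faster.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

# SAM expects RGB images; OpenCV loads BGR, so convert.
image = cv2.cvtColor(cv2.imread("photo.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)  # the heavy image embedding is computed once, here

# One foreground click (label 1); label 0 would mark a point to exclude.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[500, 375]]),
    point_labels=np.array([1]),
    multimask_output=True,  # return several candidates when the click is ambiguous
)
print(masks.shape, scores)  # (3, H, W) boolean masks, one confidence score each
```

Swapping SamPredictor for SamAutomaticMaskGenerator gives you the “mask everything in the image” behavior from the list above.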

SAM was trained on a dataset (SA-1B) containing more than 1 billion segmentation masks, 400x more than any previous segmentation dataset.
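
For a sense of what that looks like on disk: SA-1B ships as image/JSON pairs, with masks stored in COCO run-length encoding. Here’s a rough sketch of decoding one image’s masks with pycocotools, assuming the per-image JSON layout described on the dataset card (the filename is a placeholder):

```python
# Rough sketch: decode all masks for one SA-1B image. Assumes each image has
# a sibling JSON with an "annotations" list holding COCO-RLE "segmentation"
# entries, per the SA-1B dataset card; sa_000000.json is a placeholder name.
import json
from pycocotools import mask as mask_utils

with open("sa_000000.json") as f:
    record = json.load(f)

masks = []
for ann in record["annotations"]:
    rle = ann["segmentation"]
    if isinstance(rle["counts"], str):
        rle["counts"] = rle["counts"].encode("utf-8")  # pycocotools wants bytes
    masks.append(mask_utils.decode(rle))  # H x W uint8 array, 1 = object

print(f"decoded {len(masks)} masks")
```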

Check out the demo here.

OpenAI’s approach to safety 🦺

As AI continues to evolve rapidly, more players are taking safety into account.

We don’t want an Ex Machina scenario on our hands.

Perhaps the biggest AI company, OpenAI, recently published a blog post discussing their approach to AI safety.

Here’s the breakdown:

  • Building safe AI systems: New systems undergo rigorous testing to improve the model’s behavior, and techniques like reinforcement learning from human feedback (RLHF) are used to build broad safety and monitoring systems. For example, before GPT-4 was released, it underwent more than six months of testing to make it safer and more aligned.

  • Learning from real-world use to improve: New systems are released cautiously and gradually to a broadening group of people so OpenAI can make continuous improvements. Developers can build the technology directly into their apps via the API, which lets OpenAI monitor usage and act on misuse (see the code sketch after this list).

  • Protecting Children: People must be 18 or older, or 13 or older with parental approval, to use OpenAI’s AI tools. GPT-4 is 82% less likely to respond to requests for disallowed content than GPT-3.5.

  • Privacy: Data is used to make OpenAI’s models more helpful for people, not for advertising or building profiles of people. According to OpenAI, the models are designed to learn about the world, not people.

  • Factual Accuracy: The factual accuracy of GPT-4 has improved drastically. The new model is 40% more likely to produce factual content than GPT-3.5.
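
For developers, the “learning from real-world use” piece is the API itself. Here’s a rough sketch of that loop, screening a prompt with OpenAI’s free moderation endpoint before sending it to GPT-4, using the openai Python package as of this writing; the prompt text is a made-up example.

```python
# Rough sketch: moderation check, then a GPT-4 chat completion.
# Set OPENAI_API_KEY in your environment first.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

user_input = "Explain how RLHF fine-tuning works, in two sentences."

# The moderation endpoint flags disallowed content before it reaches the model.
moderation = openai.Moderation.create(input=user_input)
if moderation["results"][0]["flagged"]:
    raise ValueError("Input was flagged by the moderation endpoint")

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": user_input}],
)
print(response["choices"][0]["message"]["content"])
```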

Bottom Line: Safety is a crucial step in creating new AI systems. Extensive experimentation and engagement are necessary to create effective and safe AI.

Check out the full blog post here.

What is Anthropic’s $5B plan? 📝

Anthropic is setting its sights high.

A recent pitch deck for Anthropic’s Series C fundraising round disclosed long-term goals for the AI research startup.

  • Raise $5B over the next two years to take on rival OpenAI

  • Build a “frontier model”, tentatively called “Claude-Next”, that is 10x more powerful than today’s most capable AI

The frontier model is described as a “next-gen algorithm for AI self-teaching.” The model could be used to perform research, generate books, build virtual assistants, and more, similar to GPT-4. 👀

“We believe that companies that train the best 2025/2026 models will be too far ahead for anyone to catch up in subsequent cycles,” reads the pitch deck. “These models could begin to automate large portions of the economy.”

We’re excited to see what Anthropic builds next.

That’s it for today! We’ll see you back here next Monday.

Also, Happy Easter! 🐣