Welcome to our weekly roundup of 100% human-written AI news.
This week, an unfiltered Grok generated controversial images.
You may have to prove you’re human to use the internet.
And Google’s new phone does cool stuff we’re not sure it should.
Let’s dig in.
Picture this
When Grok 2 launched, its image generation capabilities stole the show for controversial reasons. Its apparent lack of filters quickly left the internet awash with images that other models refuse to create.
Elon Musk’s ‘unwoke’ AI project has been heavily criticized. But how dangerous is it really? Is objective, unfiltered AI desirable or even achievable?
The potential for AI to spread disinformation was well illustrated this week. Donald Trump posted AI fakes to claim that Taylor Swift and her fans supported his bid to become the next US president.
Are Swifties leaning right? Does Trump believe the images are real? Was he only kidding? With AI, ‘reality’ is whatever you want it to be now.
Here’s what happens when you take Grok images and throw them into Kling.
The Hustle
Somebody said uncensored? Thank you @grok for letting us all have some fun
Note: We don’t hold any political stance, just having some fun @elonmusk #GROK #GROK2 #grokimages #Grok2images pic.twitter.com/w5y428fVJ9
— The Dor Brothers (@thedorbrothers) August 21, 2024
Are you human? Prove it.
The days of check boxes and CAPTCHAs separating humans from bots are long gone. AI-generated images, video, and audio are getting so good that it’s almost impossible for online services to tell if a user is human or AI.
A team of researchers, which includes OpenAI, says you might need ‘personhood credentials’ to prove you’re not AI if you want to use the internet in the future.
Should I need some agency’s affirmation to confirm I’m human? How long before Sam Altman suggests that his eye-scanning orb is the best way to do this?
Cooperation or capitulation?
OpenAI is trying to stay on the straight and narrow while its lawsuit with the New York Times continues. This week it struck a data deal with Condé Nast, publisher of Vogue, The New Yorker, and Wired.
Comments from both sides of the partnership offer interesting insights into where news content is heading in an AI world. Publishers who survived the death of print media face choppy waters ahead.
The ‘You stole our data!’ saga continued unabated this week, as a group of authors filed a class action lawsuit against Anthropic. The authors claim that Anthropic used pirated books to train Claude.
The story behind the dodgy dataset is interesting and Anthropic is unlikely to be the only company that used it. The defense for creating the dataset with copyrighted books boils down to: ‘We really needed it.’
Smartphone reimagined
Google’s new Pixel 9 smartphone is smarter than ever. It’s packed with AI-powered features that deliver capabilities we’ve never had in our phones before. But is that a good thing?
The AI-enhanced camera system and Reimagine feature redefine the concept of using your phone to ‘capture the moment’.
These new features are impressive, but should we be embedding AI in everything when we’re not sure how to make it behave?
If the idea of a phone packed with AI makes you a little nervous, you might not be lining up to buy one of Unitree’s mass production robots.
Unitree G1 mass production version, leap into the future!
Over the past few months, Unitree G1 robot has been upgraded into a mass production version, with stronger performance, ultimate appearance, and being more in line with mass production requirements. We hope you like it.… pic.twitter.com/Hi4mL65d6z
— Unitree (@UnitreeRobotics) August 19, 2024
AI autism detector
Autism can be difficult to diagnose in children, especially when they’re very young. However, early diagnosis can ensure a child receives timely interventions to improve their development and long-term outcomes.
Researchers in Sweden developed a machine learning model that uses basic medical and background information to detect autism with impressive accuracy.
In other news…
Here are some other clickworthy AI stories we enjoyed this week:
Google says its Imagen 3 AI image generator beats DALL-E 3.
OpenAI launches fine-tuning for GPT-4o with 1M free training tokens per day. The world’s most powerful AI model just got better.
OpenAI says an Iranian group used ChatGPT to try to influence the US election.
Cisco posted $10.3 billion in profits last year but it’s still laying off 5,500 workers as part of an effort to invest more in AI.
Medical researchers claim their new AI algorithm can detect illness by looking at your tongue.
A startup led by former Google researchers is using AI to give computers a sense of smell.
Luma Labs launched its latest text-to-video tool, Dream Machine 1.5.
Dream Machine 1.5 is here! Now with higher-quality text-to-video, smarter understanding of your prompts, custom text rendering, and improved image-to-video! Level up. https://t.co/G3HUEBE2ng #LumaDreamMachine pic.twitter.com/VQvfSTK0AI
— Luma AI (@LumaLabsAI) August 19, 2024
And that’s a wrap.
The unhinged nature of Grok 2 added some entertainment to our AI news week but I’m guessing companies like Disney are firing up their lawsuit machines. Do you think Grok needs taming, or should we be able to make whatever images we want?
I know we’re all tired of the bots, but should we be entertaining the idea of an agency giving us a ‘you are human’ stamp of approval so we can use the internet? What happens when the ‘AI has rights too’ movement inevitably starts up?
Let us know what you think, follow us on X, and please send us links to interesting AI research or news we may have missed.
The post DAI#53 – Wild pics, AI Swifties, and how to prove you’re human appeared first on DailyAI.