Welcome to our weekly roundup of human-generated AI news.
This week AI tested ethical and trust boundaries.
A synthetic cancer worm might send your next email.
And AI policy could decide the next US election.
Let’s dig in.
AI vs. ethics and trust
As tech companies chase AI supremacy, they seem more concerned with ‘Can we?’ than ‘Should we?’
Recent actions by Microsoft, NVIDIA, and Apple show how the tech industry is skirting ethical boundaries and trading trust for data and talent.
You might like the idea of AI handling the boring parts of your job, but would you consider it a colleague?
Software company Lattice became the first to grant its enterprise AI tools a measure of workers’ rights, but the move didn’t go down well with the humans.
AI worms
Researchers created an AI-powered ‘synthetic cancer’ worm to raise awareness of a new frontier in cyber threats.
The way the worm uses GPT-4 to rewrite, hide, and distribute itself is equal parts fascinating and terrifying.
NATO is taking AI threats seriously. It recently released a revised strategy to combat them, one that offers insight into the nervous backroom discussions taking place in Europe.
The strategy raises some surprising concerns and also highlights the disconnect between national and corporate AI interests.
While nation-states prepare for AI attacks from their adversaries, maybe they should be more concerned about AI that doesn’t share their views on patriotism or borders.
My bad guys https://t.co/csoFVR5Gzz pic.twitter.com/jYexgHenwq
— AI Notkilleveryoneism Memes (@AISafetyMemes) July 12, 2024
Red pill or Blue pill?
As the US election hots up, Trump allies are preparing a “Make America First in AI” framework to roll back Biden’s regulations and kick off a series of AI “Manhattan Projects”.
Could AI policy swing the powerful tech sector vote from Blue to Red? The proposed policy would remove a lot of regulations that AI developers currently face.
Politics may be important, but in Silicon Valley cash is king.
Another ongoing US battle sees AMD fighting NVIDIA for a slice of the AI pie. AMD bought the private Finnish AI lab Silo AI in a $665 million cash deal, picking up AI talent and expertise it previously lacked.
Doing business in the EU is getting increasingly tricky for AI companies due to mounting data regulations. Meta is expected to release its big Llama 3 400B multimodal model next week but says it won’t be making it available in the EU.
Taking the Mickey
Hacktivists stole a bunch of corporate and creative data from Disney’s internal Slack channels.
The hackers claim their actions were in protest of artists’ rights being compromised as Disney and other companies increasingly embrace AI in their creative processes.
Disney will no doubt publicly support its human artists while, in the boardroom, executives whisper, ‘Hey, have you seen how much money we can save if we replace people with AI?’
Will we see similar cyber protest action as AI competes with music artists?
This week AI made it easier for us to explore and find new songs to listen to. YouTube Music and Deezer are testing new AI-powered search tools that let you describe the playlist you want or even hum to search for that song title you can’t quite remember.
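If you’re curious how describe-a-playlist search might work under the hood, here’s a minimal sketch: rank a song catalog against a free-text query using plain TF-IDF matching. The catalog, query, and method below are all our own toy assumptions; the real services almost certainly rely on learned audio and text embeddings rather than anything this simple.

```python
# Toy sketch of describe-a-playlist search: rank a song catalog against a
# free-text query. Catalog and query are made-up assumptions, not real data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical catalog: track title -> short text description.
catalog = {
    "Midnight Drive": "moody synthwave with a slow pulsing beat",
    "Sunrise Run": "upbeat electronic pop, bright and energetic",
    "Rainy Cafe": "mellow acoustic jazz, soft piano and brushes",
}
query = "soft mellow jazz with piano for studying"

docs = list(catalog.values())
vectorizer = TfidfVectorizer()
# Fit on the descriptions plus the query so they share one vocabulary.
vectors = vectorizer.fit_transform(docs + [query])
scores = cosine_similarity(vectors[len(docs)], vectors[:len(docs)]).ravel()

# Print tracks ranked by similarity to the query description.
for title, score in sorted(zip(catalog, scores), key=lambda t: -t[1]):
    print(f"{score:.2f}  {title}")
```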
Playing doctor
AI is helping to diagnose diseases, create new drugs, and analyze medical imaging. But in the excitement of these advancements, are we missing something important?
Scientists are calling for ethical guidelines to regulate LLMs as they play wider roles in healthcare. Could we have more of those for Big Pharma CEOs too, please?
Researchers attempted to use AI to help resolve the debate over the relationship between biological sex and gender identity.
When they used AI to analyze children’s fMRI brain scans, the models delivered intriguing results in predicting biological sex and self-reported gender.
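For a feel of the general technique (emphatically not the study’s actual pipeline), here’s a minimal sketch: fit a simple classifier on brain-scan-style features and check cross-validated accuracy. Everything below, the cohort size, features, and model, is a synthetic stand-in.

```python
# Minimal sketch of label prediction from brain-scan-derived features.
# Data is synthetic and the model deliberately basic; the real study likely
# used actual fMRI connectivity features and more sophisticated models.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_subjects, n_features = 200, 50          # hypothetical cohort size

# Synthetic "connectivity" features with a weak signal for the label.
labels = rng.integers(0, 2, n_subjects)
features = rng.normal(size=(n_subjects, n_features))
features[:, 0] += 0.8 * labels            # one feature carries signal

# Cross-validated accuracy well above 0.5 means the features predict the label.
scores = cross_val_score(LogisticRegression(max_iter=1000), features, labels, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```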
The human brain and AI models are frustratingly similar in one aspect: They’re often inscrutable black boxes.
When ChatGPT gives you the right answer, how does it arrive at it? Are AI models capable of reasoning, or do they simply recite and rework their training data?
Researchers performed some interesting experiments to answer that question.
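A common design in this line of work is the counterfactual probe: pose the same task in a familiar form (say, base-10 arithmetic) and in a rare variant (base-9), then compare accuracy. Here’s a minimal hypothetical sketch of that idea; ask_model is a stub standing in for a real LLM call.

```python
# Sketch of a "reasoning vs. reciting" probe: compare accuracy on a common
# task form against a rare counterfactual form, where pattern recall alone
# shouldn't help. ask_model is a hypothetical stand-in for an LLM API call.
import random

def ask_model(prompt: str) -> str:
    # Stub: replace with a real LLM call to actually run the probe.
    return "0"

def true_answer(a: int, b: int, base: int) -> str:
    # Ground truth: render a + b as digits in the given base.
    total, digits = a + b, []
    while total:
        digits.append(str(total % base))
        total //= base
    return "".join(reversed(digits)) or "0"

def accuracy(base: int, trials: int = 50) -> float:
    hits = 0
    for _ in range(trials):
        a, b = random.randint(10, 80), random.randint(10, 80)
        prompt = f"What is {a} + {b}, written in base {base}? Digits only."
        hits += ask_model(prompt).strip() == true_answer(a, b, base)
    return hits / trials

# A large gap between base-10 and base-9 accuracy would suggest the model
# leans on memorized base-10 patterns rather than a general procedure.
print("base 10:", accuracy(10), " base 9:", accuracy(9))
```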
In other news…
Here are some other clickworthy AI stories we enjoyed this week:
Whistleblowers say OpenAI illegally barred staff from airing safety risks.
OpenAI is reportedly working on a new reasoning technology codenamed ‘Strawberry’. (Call us when rumor becomes fact.)
AI makes writing easier, but the stories end up sounding alike.
AMD revealed the details of its latest AI PC architecture built around neural processing units (NPUs).
AI’s ‘Oppenheimer moment’: autonomous weapons enter the battlefield.
Trump’s VP pick, JD Vance, wants looser AI regulations but more Big Tech accountability.
OpenAI co-founder Andrej Karpathy has started an AI education company called Eureka Labs.
Excited to share that I am starting an AI+Education company called Eureka Labs.
The announcement:
—
We are Eureka Labs and we are building a new kind of school that is AI native.
How can we approach an ideal experience for learning something new? For example, in the case… pic.twitter.com/RHPkqdjB8R
— Andrej Karpathy (@karpathy) July 16, 2024
And that’s a wrap.
Do you think researchers should be creating AI-powered ‘synthetic cancer’ worms to show what bad actors could potentially be making?
It definitely has virus gain-of-function research vibes. An AI lab leak seems almost inevitable if it hasn’t already happened.
If you’re voting in the US election, will AI policy be enough to change your vote? Either way, two really old guys deciding AI development policy might not be the best approach.
This week was decidedly light on Meta, Google, and OpenAI news. Could we be in for a bumper crop next week?
Let us know what you think, connect with us on X, and please send us juicy AI links we may have missed.