AI Advancements, Ethical Dilemmas, and the Future of Creativity (Weekly Crunch)
The Aim 002
The 30-second Takeaways
Claude 3.5 from Anthropic is very impressive and capable of complex tasks like job interview prep
Apple announced device-based AI, allowing the use of models like ChatGPT while prioritising privacy
The host prefers open-source AI models he can inspect over closed-source ones
Perplexity allegedly scraped website content without permission for its training data, raising ethical concerns about responsible AI development
Internet speeds may be slowing as large language models incessantly crawl and ingest data, including video, from across the web for training
AI video generation is advancing rapidly, with tools like Luma AI’s Dream Machine creating astonishing visuals
While exciting for creativity, the host reminds us that AI should augment and empower humans, not replace us
The Transcript
Enjoying the Great Outdoors While Pondering AI (A Friday Round-up)
Hey there, friends! It’s me again, recording my weekly AI round-up on this lovely Friday morning while out for a stroll. You might be wondering why I insist on recording outside instead of just sitting at home. Well, there are a couple of reasons for that.
First off, I’m a busy professional juggling a full-time job along with this passion project. So I have to take advantage of any free pockets of time I can find — mornings, evenings, lunch breaks. And you know what? Getting some exercise by walking to the office is part of my routine for maintaining good mental and physical health. It’s like meditation for me.
The second reason is that being out in nature simply promotes clearer thinking. There’s a reason “walking meditation” and “walk-and-talk therapy” sessions involve getting some steps in. The gentle motion helps the mind open up. So these outdoor recordings kill two birds with one stone for me — I get exercise and prime my brain for deep thoughts on the AI topics I’m so enthusiastic about.
Speaking of AI, let’s dive into the major developments I’ve been tracking this week!
Anthropic’s Claude 3.5 Steals the Show
One of the biggest AI product launches over the past few days has been Claude version 3.5 from Anthropic. I’ve been a huge fan of Claude since the early days, when Anthropic spun off from OpenAI and positioned Claude as a safer, more human-centric AI assistant.
With version 3.5, they’ve levelled up in a big way. This Sonnet variant is incredibly capable, rivalling and perhaps even exceeding GPT-4o in my testing. I put it through its paces by having it prep an entire briefing document for my partner’s upcoming job interview — collating her resume, the job description, practice questions, etc. into a stellar preparation guide. Claude nailed the context-heavy task flawlessly.
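(A quick aside for anyone who would rather script this kind of exercise than paste everything into the chat window: below is a minimal sketch using Anthropic’s Python SDK. The file names, prompt wording and token limit are illustrative assumptions, not a record of exactly what I did.)

```python
# Minimal sketch: asking Claude 3.5 Sonnet to turn a resume and a job
# description into an interview briefing document.
# Assumes the `anthropic` package is installed and ANTHROPIC_API_KEY is set.
import anthropic

client = anthropic.Anthropic()

resume = open("resume.txt").read()                    # illustrative file names
job_description = open("job_description.txt").read()

message = client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=2000,
    system="You are a career coach preparing a candidate for a job interview.",
    messages=[{
        "role": "user",
        "content": (
            "Using the resume and job description below, write a briefing "
            "document with likely interview questions, strong example answers, "
            "and talking points that map the candidate's experience to the role.\n\n"
            f"RESUME:\n{resume}\n\nJOB DESCRIPTION:\n{job_description}"
        ),
    }],
)

print(message.content[0].text)
```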
So yeah, I’m incredibly impressed with Claude 3.5 and I expect to be using it as my go-to AI assistant for all sorts of tasks moving forward. I also appreciate Anthropic’s transparency about the different Claude model sizes and what each one is designed for.
Apple Steps Into AI (With Caveats)
The other major AI news this week came from Apple announcing their entrance into the on-device AI game. As a proud Apple fan myself, I’m excited about this development but also have some reservations.
On the plus side, having AI processing happen locally on devices is a big win for data privacy, which is extremely important to me. I don’t love the idea of my personal or professional info being siphoned off into the cloud databases of the major tech giants. An on-device AI is a safeguard against that.
However, from what I’ve heard so far, the first iteration of Apple’s offering will simply allow ChatGPT to be integrated into certain apps and services. Now don’t get me wrong, ChatGPT is great…but I was hoping for the ability to use other transparent, open-source AI models as well. Models like Llama 3 or Mistral, where you can download the weights, run them yourself, and understand the principles they were developed under.
I don’t want to be limited to the “black box” closed-source models of a single company. Having a choice and understanding what you’re working with is crucial when it comes to mitigating AI biases and safety concerns. So I’ll be watching Apple’s moves in this space closely.
The Closed vs Open Source AI Debate
Speaking of that open vs closed source discussion, we’re starting to see the divide become an issue as more and more companies rush into developing their own competing large language models behind closed doors.
Just look at the controversy around Perplexity AI this week. According to reports, they appear to have blatantly violated websites’ terms and conditions by scraping huge amounts of content to build their training datasets — and not just that, but their AI has been caught essentially hallucinating responses when it lacks sufficient data to answer queries properly.
If true, this is an incredibly unethical practice that throws responsible AI development out the window in the pursuit of being first to market. And it’s likely happening industry-wide behind the scenes, not just at Perplexity, as companies race to amass more and more data to make their models bigger and “better.” We’re seeing a modern-day AI gold rush with very little governance.
That’s why I lean towards using more transparent, open-source models whenever possible. At least I can inspect what principles they were developed under and decide if I trust the methodology, instead of just taking some company’s word that their black box is safe and unbiased.
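(If you’re wondering what “inspecting and running it yourself” looks like in practice, here is a minimal sketch that loads an open-weight model locally with the Hugging Face transformers library. The model name and prompt are illustrative, and some models require accepting a licence on the Hugging Face Hub before the weights can be downloaded.)

```python
# Minimal sketch: running an open-weight instruct model locally with
# Hugging Face transformers. Requires `transformers`, `torch` and, for
# device_map="auto", the `accelerate` package.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",  # illustrative open-weight model
    device_map="auto",                            # use a GPU if one is available
)

result = generator(
    "List three arguments for preferring open-weight AI models.",
    max_new_tokens=200,
)

print(result[0]["generated_text"])
```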
Is the Internet Hitting Its Limits?
With all this AI training happening, leveraging the internet as a core dataset, there are murmurs that we may quite literally be hitting the limits of our global network bandwidth. Have you noticed internet slowdowns yourself lately?
I’ve had terrible speeds and connectivity issues this week, and experts like Emad Mostaque have posited that it’s because all these huge language models are essentially DDoS’ing the internet by incessantly crawling and ingesting data from across the web in their training loops.
When you think about dozens or hundreds of the biggest tech companies and startups all doing this data hoarding in parallel, it’s not hard to imagine them straining the available bandwidth and computing resources, especially as we move into multi-modal training that incorporates images, video, audio and more. This feels like a “tragedy of the commons” situation waiting to happen if no one governs this arms race.
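(One small, concrete piece of that governance already exists: the robots.txt convention, which tells crawlers what a site does and doesn’t want fetched. Here is a minimal sketch of the check a well-behaved crawler performs, using only Python’s standard library; the URL and user-agent string are illustrative.)

```python
# Minimal sketch: checking a site's robots.txt before fetching a page,
# the basic courtesy that responsible crawlers extend to website owners.
from urllib.robotparser import RobotFileParser

robots = RobotFileParser("https://example.com/robots.txt")
robots.read()  # download and parse the site's crawling rules

user_agent = "MyResearchCrawler"                 # illustrative crawler name
page = "https://example.com/articles/some-post"  # illustrative target page

if robots.can_fetch(user_agent, page):
    print("robots.txt allows fetching:", page)
else:
    print("robots.txt disallows fetching:", page)
```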
But Let’s End on a High Note…
Okay, that’s quite a lot of concerns and ethical debates I’ve raised here. Let’s wrap up on a more inspiring, positive note, shall we? Because there have been some truly mind-blowing developments in the creative AI space as of late.
I’m talking about the photorealistic video generation from companies like Luma AI, whose tool is modestly named the Dream Machine. We’re not just talking about static image generation anymore — Luma can create entire animated scenes from simple text descriptions that professional video teams would struggle to make look as polished.
It really does feel like we’re entering the age of “visual magic” through AI. And while we can’t yet generate feature-length films at the push of a button, these technologies open up amazing new avenues for creative professionals, storytellers, artists and more. Their imaginations are no longer bounded by manual production constraints.
At the end of the day, AI is a tool to augment and empower human creativity and capabilities — not replace them. We are the dreams from which these “dream machines” spring forth. As we move forward responsibly and ethically with AI development, the horizons of what’s possible for us as a species have never looked brighter.
Whew, that’s a wrap for my weekly AI round-up! Thanks for joining me on this lovely Geneva morning. Until next time, keep dreaming big, friends!
Thanks and acknowledgements
This podcast was produced with passion and love in green Geneva, Switzerland. It is proudly sponsored by Valeris Coaching, and primarily produced and delivered by senior coach Lucas Challamel as part of his YouTube channel, The Camel Hall.
Claude 3 Opus (https://poe.com/Claude-3-Opus) contributed to brainstorming and served as a sounding board for some conceptual propositions.
The video was patiently edited with the free version of CapCut, including sound FX, stickers and automated captions (such a great feature!).
THE AIM: Navigating the Philosophical Frontier of Augmented Intelligence 🧭
Join Lucas Challamel, a veteran tech leader and CTO, as he guides you through the existential questions raised by the rapidly evolving world of artificial intelligence (AI). This is not just another podcast about the technical nitty-gritty — THE AIM dives into the profound implications AI will have on our lives, businesses, and society as a whole.
🌊 AI’s Disruptive Wave
From OpenAI to Google, the AI revolution is swiftly crashing upon us in an unstoppable wave of innovation and disruption. Billions are being invested as tech titans race to develop ever-more-powerful AI systems. But are we truly prepared for the philosophical ripples this tidal force will send through industries, governments, and communities worldwide?
📜 The Next Chapter in Human Intelligence
Throughout history, groundbreaking innovations like the printing press, the Industrial Revolution, and the computer age have radically reshaped civilisation’s trajectory. Now, AI represents the next seismic reinvention of intelligence itself.
Just as Gutenberg’s press democratised access to information, AI promises to democratise human knowledge and creativity on an unprecedented scale. This AI-powered rebirth will make intelligence universally accessible like never before — a revolutionary force that both inspires awe and demands careful examination.
💡 Illuminating the “Dreaming Machines”
Let’s be clear — today’s AI is not sentient like biological intelligence. These systems are more akin to hyper-advanced “dreaming machines” that generate plausible outputs by recognising patterns in vast datasets, similar to how our sleeping minds weave dream imagery from our experiences.
While incredibly powerful, today’s AI lacks human capacities for reason, emotional intelligence, and open-ended innovation…for now. But even at this stage, AI raises profound questions. How will these “dreaming machines” impact education, creative industries, and our collective sense of reality?
🧠✨ Augmenting Human Potential
Rather than replacing us, AI’s ultimate potential may lie in augmenting human intelligence as a complementary tool. Imagine having a personal AI assistant to access the world’s skills and knowledge on demand. Or leveraging AI simulations to solve complex multivariable challenges. Or customised AI tutors tailoring lessons to each student’s unique needs.
By amplifying our natural abilities, AI can uplift the entire scope of human potential, accessibility, and achievement. Open-source AI initiatives hint at a future where these augmentation capabilities are available not just to big tech, but to everyone.
⚖️ Navigating AI’s Ethical Implications
Of course, realising AI’s promise will require carefully navigating ethical minefields around privacy, surveillance, job displacement, IP rights, disinformation, and more. As AI blurs the lines between real and artificial, truth and fiction, we must develop guidelines to wield this power responsibly.
THE AIM brings a human-centric, philosophical perspective to these challenges. Through expert interviews, deep dives, and open discussions, we’ll cut through the hype to explore how best to cultivate AI as a force for individual empowerment and positive global impact.
💥 Embrace the Future of Intelligence
So brace yourself for a mind-expanding journey into AI’s profound existential frontier. Whether you’re a tech innovator, business leader, student, or just intensely curious about the future of intelligence, THE AIM is your guide through both the awe-inspiring possibilities and risks that await.
Join us as we celebrate humanity’s unbounded potential for creativity, empathy and growth — because even as we augment ourselves with AI’s power, our unique strengths as a species will remain indispensable.
Buckle up and get ready to dream alongside the “dreaming machines”! The AI revolution is coming…and Lucas Challamel’s THE AIM will prepare you for its world-shaping impact.
#AI #ArtificialIntelligence #AIPodcast #TechPodcast #FutureTech #EmergingTech #Innovation #Disruption #AugmentedIntelligence
#PhilosophyofAI #ExistentialAI #Ethics #TechEthics #Futurism #Singularity #DreamingMachines #GenerativeAI #LargeLanguageModels