AI's Balancing Act of Promise and Peril

Tomorrow Bytes #2344

This week’s newsletter examines a pivotal moment in AI discourse, marked by President Biden's executive order and the UK's AI Safety Summit focused on existential risks. It provides byte-sized breakdowns of key AI advances, issues, and signals across business, research, and responsible AI. With growing calls to ensure safety and accountability amid rapid progress, the briefing explores AI's future as it reaches an inflection point.

As AI becomes even more interwoven into our daily lives, Tomorrow Bytes is here to shed light on its transformative journey. Let’s get into it…

📌 Byte-sized Breakdowns

🔦 Spotlight Signals

AI Poised to Radically Reshape IT Industry: Areas seeing rapid change include customer service chatbots, predictive maintenance, automated security threat detection, and software development tools. [Dive In]

AI Art Generators Score Win in Copyright Lawsuit: In an ongoing legal battle over AI-generated art, Midjourney, Stability AI, and DeviantArt largely prevailed as a judge dismissed most of the copyright infringement claims brought by artist plaintiffs. [Dive In]

UK PM Sunak to lead AI summit talks before Musk meeting: Sunak faces backlash for planning to meet with Musk, whose recent Twitter conduct and layoffs sparked concerns. [Dive In]

Scarlett Johansson threatens legal action: As generative AI creates new intellectual property dilemmas, the dispute highlights an emerging issue - the unauthorized use of celebrity likenesses by AI models. [Dive In]

Investor Frenzy Risks Overvaluing AI Startups: With billions pouring into AI, deals are being struck at "nosebleed" prices far exceeding revenues. But most AI upstarts have unproven business models. [Dive In]

Microsoft Moves to Make AI Systems More Socially Aware: Microsoft is working to make systems like ChatGPT more cautious and socially aware. Features aim to curb offensive answers and misinformation by better modeling appropriate conduct. [Dive In]

AI System Can Diagnose Diabetes from Voice in Seconds: The system identified acoustic biomarkers linked to diabetes with 85% accuracy, an approach that could enable frequent, low-cost diabetes screening through smartphone voice analysis (a rough pipeline sketch follows these signals). [Dive In]

Microsoft Pushes Boundaries of Small AI Models: MiniLM and MicroLLM show impressive performance at a fraction of the size of behemoths like GPT-3. These techniques open up AI applications on devices with limited resources. [Dive In]
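Since voice-based screening like the diabetes signal above hinges on turning acoustic biomarkers into a prediction, here is a minimal, hypothetical sketch of such a pipeline. The features, placeholder data, and model choice are illustrative assumptions, not details of the actual study.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score


def voice_features(waveform: np.ndarray, sample_rate: int = 16_000) -> np.ndarray:
    """Crude stand-ins for acoustic biomarkers: energy, zero-crossing rate, spectral centroid."""
    energy = float(np.mean(waveform ** 2))
    zero_crossings = float(np.mean(np.abs(np.diff(np.sign(waveform)))) / 2)
    spectrum = np.abs(np.fft.rfft(waveform))
    freqs = np.fft.rfftfreq(len(waveform), d=1 / sample_rate)
    centroid = float(np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-9))
    return np.array([energy, zero_crossings, centroid])


# Placeholder dataset: random "recordings" with made-up labels; real work would
# use labeled clinical voice samples and far richer features.
rng = np.random.default_rng(0)
X = np.stack([voice_features(rng.normal(size=16_000)) for _ in range(200)])
y = rng.integers(0, 2, size=200)

scores = cross_val_score(GradientBoostingClassifier(), X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```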

💼 Business Bytes

Are AI Safety Measures Slowing Innovation?

A philosopher argues that techniques like AI attenuation, meant to make models safe, also constrain their capabilities. He contends that safety protocols such as censoring AI training data inhibit discoveries. However, proponents say prudent measures are needed to prevent harm while allowing progress. The debate over how to balance AI advances and risks continues. [Dive In]

AI Named 2023 Word of the Year by Collins Dictionary

Collins Dictionary has named AI its word of the year for 2023, underscoring its growing prominence. Defined as "the theory and development of computer systems able to perform tasks normally requiring human intelligence," AI saw an exponential increase in usage as generative models like DALL-E went mainstream. The choice reflects AI's cultural impact. [Dive In]

VC Firm Bets Big Across Expanding AI Landscape

A VC firm run by ex-Googlers is making broad bets across the proliferating AI landscape. Areas span from large language model developers like Anthropic to startups applying AI to pharma, like Insitro. The firm sees endless potential as narrow AI transforms industries. With this diversified portfolio, it aims to capture value as AI unlocks new products, businesses, and automation capabilities. But the excitement comes with risks if hype outpaces real impact. [Dive In]

ChatGPT's Soaring Revenue Faces Rising AI App Rivals

ChatGPT has dominated AI app revenues, raking in an estimated $386,000 daily just two months after launch. But rivals are catching up fast, with AI writing tools like Rytr and Foundry climbing the charts. As the AI app market expands beyond chatbots, consumers have more choices that tailor AI to specific use cases. While ChatGPT maintains its lead for now, its stratospheric growth may slow as competitors seize niche opportunities. [Dive In]

☕️ Personal Productivity

Knowledge Assistant Uses AI to Answer Questions

A new AI knowledge assistant created by Anthropic can answer natural language questions on any topic by searching the web. It summarizes information from credible sources into concise responses. This showcases how far conversational AI has come in understanding queries and retrieving relevant information. [Dive In]

Grammarly Adds Tone Tuning to Make AI Writing More Human

Grammarly has launched an AI feature called Tone Tuning to make suggestions sound more natural and human. It adjusts wording to reduce repetitiveness and robotic phrasing when using Grammarly's writing assistance. This enhancement addresses a major critique of AI writing aids - that they lead to stiff, formulaic text. [Dive In]

Quora Launches AI Chatbot Platform for Its Creator Network

Quora has introduced Poe, a no-code AI chatbot platform for creators on its network. Poe lets subject matter experts quickly build customized chatbots to engage their audiences. This chatbot creator economy model capitalizes on Quora's in-depth user knowledge base to make AI more accessible. [Dive In]

How to Fight Back When AI Steals Your Content

As generative AI proliferates, copyright issues abound with AI models training on scraped content. But creators are finding ways to outwit them by adding fake "honeypot" text that trips up AI. Other strategies include adding watermarks and deliberate errors. The fight against content scraping is just beginning. [Dive In]
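As a concrete illustration of the honeypot and watermarking tactics described above, here is a minimal sketch, assuming a creator wants to hide an invisible marker in published text so that verbatim scrapes can be recognized later. The zero-width encoding and function names are hypothetical, not drawn from the article.

```python
# Minimal sketch: embed an invisible "honeypot" marker in published text so that
# verbatim scrapes can be recognized later. The encoding scheme is hypothetical.

ZERO_WIDTH = {"0": "\u200b", "1": "\u200c"}  # zero-width space / zero-width non-joiner


def embed_marker(text: str, marker_bits: str) -> str:
    """Hide a short bit pattern as zero-width characters after the first word."""
    hidden = "".join(ZERO_WIDTH[bit] for bit in marker_bits)
    first_space = text.find(" ")
    if first_space == -1:
        return text + hidden
    return text[:first_space] + hidden + text[first_space:]


def contains_marker(text: str, marker_bits: str) -> bool:
    """Check whether a suspect copy still carries the hidden bit pattern."""
    hidden = "".join(ZERO_WIDTH[bit] for bit in marker_bits)
    return hidden in text


if __name__ == "__main__":
    marked = embed_marker("My original article text goes here.", "1011")
    print(contains_marker(marked, "1011"))  # True for a verbatim copy
```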

🎮 Platform Plays

Google Paid Apple $26B in 2021 to Remain Default iOS Search

Google paid Apple a staggering $26.3 billion in fees in 2021 to remain the default search engine on iOS devices, per newly unredacted court filings. This massive payout highlights the companies' controversial search partnership amid ongoing antitrust scrutiny. Critics argue the deal stifles competition, but both giants benefit handsomely from their entrenched positions. [Dive In]

Box Speeds Document Processing with Generative AI

Box has launched new generative AI capabilities powered by Vertex AI to automate document analysis and data extraction. The tools can rapidly process contracts, invoices, and other documents, extracting key fields using natural language prompts. This allows customers to analyze documents 5x faster while reducing manual data entry, providing a significant productivity boost for document-heavy workflows. [Dive In]
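To show the general shape of prompt-driven field extraction like this, here is a minimal sketch. It is not Box's or Vertex AI's actual API; the field list, prompt wording, and the stubbed call_llm function are assumptions for illustration.

```python
import json

# Minimal sketch of prompt-based field extraction from documents. This is not
# Box's product API; the field list, prompt, and call_llm stub are assumptions.

FIELDS = ["vendor_name", "invoice_number", "total_amount", "due_date"]


def build_extraction_prompt(document_text: str) -> str:
    """Ask a language model to return the named fields as JSON only."""
    return (
        "Extract the following fields from the document below and reply with "
        f"JSON only, using null for anything missing: {', '.join(FIELDS)}\n\n"
        f"Document:\n{document_text}"
    )


def call_llm(prompt: str) -> str:
    """Placeholder for whatever model endpoint is actually used (e.g. Vertex AI)."""
    # Canned reply so the sketch runs end to end without network access.
    return (
        '{"vendor_name": "Acme Co", "invoice_number": "INV-042", '
        '"total_amount": "1200.00", "due_date": "2023-12-01"}'
    )


def extract_fields(document_text: str) -> dict:
    """Build the prompt, call the model, and parse the JSON reply."""
    return json.loads(call_llm(build_extraction_prompt(document_text)))


if __name__ == "__main__":
    print(extract_fields("Invoice INV-042 from Acme Co, total $1,200.00, due Dec 1, 2023."))
```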

Google Brings Generative AI Tools for Product Images

Google has launched generative AI tools for product photography in the US, enabling advertisers to create unique product images. The AI can synthesize thousands of high-quality photos with realistic variations in angles, lighting, and more. This expands Google's suite of AI advertising products as generative models proliferate. [Dive In]

Apple Bets on Custom AI Chips to Compete in ML Space

Apple is making a big move into AI silicon, designing custom chips to power on-device machine learning. With the M2 Ultra, Apple aims to dramatically accelerate Core ML performance versus its A-series chips. This hardware investment signals Apple's intent to compete in AI amid rapid progress from rivals like Google and Meta. [Dive In]

🤖 Model Marvels

Better Proteins for Better Drugs: DeepMind's AlphaFold 3

DeepMind's latest AlphaFold 3 model predicts protein structures with greater accuracy, increasing its utility for drug discovery and development. By better modeling protein dynamics and interactions, AlphaFold 3 enables more effective in silico screening and binding affinity predictions. This leap could accelerate pharma R&D and unlock new therapies. [Dive In]

MimicGen: NVIDIA's AI Learns Robot Skills from Human Demos

NVIDIA researchers have developed MimicGen, a system that takes a small set of human demonstrations and generates large volumes of synthetic training data to teach robots new skills. By learning from these diversified demonstrations, robots can acquire complex behaviors like sweeping and bucket pouring more efficiently than with traditional imitation learning. This scalable approach could expand robots' capabilities without laborious manual data collection. [Dive In]

Google Rolls Out MetNet-3, Its Advanced Neural Weather Model

Google has launched MetNet-3, a state-of-the-art neural weather prediction model now powering forecasts in Google products globally. MetNet-3 improves precipitation forecasting by 10% over prior versions, with better performance on short-term forecasts within 6 hours. Its ensemble approach and physics-informed architecture enable more granular, accurate predictions to serve users. [Dive In]

Microsoft's LeMa Teaches AI Language Like a Child

Microsoft has developed LeMa, an AI learning method that mirrors human language development stages. LeMa combines self-supervised learning and external guidance to acquire linguistic skills incrementally. This approach could lead to more efficient, generalizable language AI with greater reasoning abilities. Microsoft aims to make LeMa openly available to advance natural language processing. [Dive In]

🎓 Research Revelations

AI Safety Summit Convenes Diverse Perspectives on Progress

The UK's AI Safety Summit convened tech leaders, ethicists, policymakers, and others to collaborate on plotting a responsible path for AI development. Ongoing dialogue among these diverse perspectives is crucial to balancing innovation with ethics and safety as this transformative technology advances. [Dive In]

Bias Persists Even After Biased AI Is Removed

A troubling new study shows humans working with biased AI absorb discriminatory assumptions that linger even when the problematic algorithm is no longer used. This highlights the critical need for tech companies to proactively audit AI systems for fairness and unintended bias to avoid perpetuating injustice. [Dive In]

AI Advances Create New Content Moderation Challenges

As AI improves at moderating harmful online content, new challenges arise around issues like transparency in automated decisions and appropriate human oversight. Companies relying on AI for moderation must ensure technology enhances, not replaces, human insight on nuanced and socially complex moderation decisions. [Dive In]

Transformative AI Innovation in Diabetes Management and Care

Exciting new AI applications enable more personalized, predictive, and preventative diabetes care, from glucose level forecasting to early detection of complications. Thoughtfully combining human expertise with these new AI capabilities has the potential to profoundly improve outcomes for the millions living with diabetes. [Dive In]

🚧 Responsible Reflections

Don't Rush to Police AI - We Risk Stifling Innovation

Governments are understandably concerned about the potential dangers of AI, but rushing to impose rigid regulations could inadvertently stifle beneficial innovation. We need nuanced laws and oversight that thoughtfully balance ethical risks with allowing progress in this fast-moving field. [Dive In]

Current Real-World Harms Should Be AI's Top Priority

Fears of AI turning against humanity make headlines, but AI is already causing significant harms in the here and now, from biased algorithms that amplify injustice to social media that damages mental health. Before speculating about future sci-fi scenarios, the tech industry needs to take responsibility for addressing these urgent real-world issues. [Dive In]

The Humans Behind the AI - Fair Treatment is Key

Much of what we call "artificial intelligence" relies heavily on unseen human labor, from content moderators to data labelers. As AI systems become more capable, ensuring the ethical, fair treatment of the human workers enabling these technologies will only grow in importance. [Dive In]

Blunt AI Plagiarism Detectors Ensnare Innocent Writers

Automated text analysis tools designed to catch plagiarism often make false accusations against innocent writers instead. This highlights the need to audit AI systems for fairness, implement appropriate human oversight, and recognize the nuances that algorithms may miss. [Dive In]

We hope our insights sparked your curiosity. If you enjoyed this journey, please share it with friends and fellow AI enthusiasts.

Until next time, stay curious!