
Early August AI News: Trends, Breakthroughs, and Innovations

This week's AI news brings you the latest on the swift advancements in AI technology and their widespread implications, from how we use AI in the workplace to the ethical considerations around how AI companies train their models. Despite these challenges, it's essential to highlight the positive aspects and groundbreaking capabilities of AI, which continue to make tremendous strides.


Image generated by DALL-E, prompted by Sam van Leeuwen of Intelligence Assist

TL;DR:


  • Workplace AI adoption surges with 75% of employees using AI tools, often ahead of company policies.

  • OpenAI's SearchGPT and GPT-4o mini reshape the AI landscape for businesses of all sizes.

  • Model degradation and illegal data scraping affect major AI players, raising concerns about AI sustainability and data privacy.

  • Mark Zuckerberg criticizes OpenAI's shift towards a closed-source model, intensifying the open vs. closed AI debate.

  • AI is expanding frontiers with recent breakthroughs making innovative strides in various fields, from driving fashion campaigns to enhancing medical diagnostics.


Written by Sam van Leeuwen, with Perplexity and Claude used to augment news blog creation


As the newest member of the Intelligence Assist team, I'm excited to bring you my first blog post. It's only my second week here, and I'm already diving deep into the fascinating world of AI. In this edition of the fortnightly AI news blog, I have curated the most relevant and impactful AI developments specifically for businesses like yours.


 

The AI Workplace Revolution: Employees Lead the Charge


Recent research reveals a growing trend of employees embracing AI tools in the workplace, often ahead of formal company policies. According to the 2024 Microsoft Work Trend Index, 75% of employees are already using AI at work, with usage nearly doubling in the last six months. This aligns with findings from UiPath, which showed that 44% of Australian knowledge workers are utilising generative AI in their jobs.


However, there's a notable gap between employee adoption and organisational readiness. The Microsoft study found that while 79% of leaders acknowledge AI as a business imperative, 60% lack strategic implementation plans. This disconnect is further highlighted by the fact that 78% of employees are bringing their own AI tools to work, often without company approval.


Major companies are responding to this trend by introducing their own AI tools. For instance, JPMorgan Chase has rolled out a generative AI product called LLM Suite to employees in its asset and wealth management division. This tool, described as comparable to working with a research analyst, assists with tasks such as writing, idea generation, and document summarisation.


At Intelligence Assist, we believe that the key to both adoption and responsible use of AI lies in clear communication and providing the right tools. Companies need to explicitly outline what AI tools are not permitted and offer approved alternatives that meet both employee needs and organisational security requirements. This approach can help bridge the gap between employee enthusiasm for AI and the need for strategic, secure implementation in the workplace.


 

OpenAI's Latest Moves: Shaking Up the AI Landscape


As the AI workplace revolution continues, major players like OpenAI are making significant strides in developing new tools and capabilities.


OpenAI's recent announcement of SearchGPT, a prototype search engine, has sparked discussions about its potential impact on existing AI-powered search tools like Perplexity. However, it's premature to consider SearchGPT a "Perplexity killer" just yet. OpenAI has only released this functionality to a limited group of users for testing, following a trend of announcing exciting features but delaying widespread access, as seen with their voice capabilities.


OpenAI's release of GPT-4o mini, a cheaper and faster model, demonstrates their ability to rapidly deploy new technologies beneficial for small and medium-sized enterprises (SMEs). The key feature of GPT-4o mini is its ability to provide advanced AI capabilities at a fraction of the cost of larger models. This accessibility allows SMEs to leverage powerful AI tools for tasks such as data analysis, content creation, and customer service without significant financial investment.


This all raises questions about the need for multiple AI tool subscriptions. While ChatGPT, Perplexity, and Claude.ai each offer unique capabilities, for most small businesses, a single tool may suffice.


Table: Tool comparison of OpenAI's ChatGPT, Perplexity.ai, and Claude.ai

While each tool has its strengths, the choice depends on specific needs. ChatGPT offers versatility and image generation, Perplexity excels in internet-integrated searches, and Claude.ai handles larger context windows and complex coding tasks. For most users, one tool should suffice, but power users may find value in maintaining multiple subscriptions to leverage each platform's unique capabilities.


 

The Dark Side of AI: Ethical Concerns and Challenges


As AI tools become more prevalent in the workplace, it's crucial to understand the challenges facing the industry.


Large Language Models (LLMs) have revolutionised the field of artificial intelligence, but they're not without their challenges. Two significant issues currently plaguing LLMs are model degradation and illegal data scraping. These problems affect all major AI models, creating a confusing landscape for businesses and individuals trying to navigate the world of AI.


Model degradation refers to the phenomenon where AI models become less accurate or reliable over time. A study from MIT, Harvard, the University of Monterrey, and Cambridge found that 91% of ML models degrade over time. This "AI aging" can occur even when there are minimal changes in the data the model is working with. Think of it like a student who aces a test but gradually forgets the material, performing worse on future tests covering the same topics.
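For teams already running AI models in production, a simple way to catch this kind of degradation early is to track the model's accuracy over rolling windows of recent predictions and flag any window that falls well below a baseline. Here is a minimal sketch of that idea; the window size, baseline, and drop threshold are illustrative assumptions, not figures from the study mentioned above:

```python
# Minimal model-degradation check: compare accuracy in successive
# windows of predictions against a baseline and flag large drops.
# The window size, baseline, and drop threshold are illustrative.

def accuracy(predictions, labels):
    """Fraction of predictions that match the true labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def degradation_alerts(preds, labels, window=100, baseline=0.95, drop=0.05):
    """Return (window_index, accuracy) for each full window whose
    accuracy falls more than `drop` below `baseline`."""
    alerts = []
    for start in range(0, len(preds) - window + 1, window):
        acc = accuracy(preds[start:start + window],
                       labels[start:start + window])
        if acc < baseline - drop:
            alerts.append((start // window, acc))
    return alerts
```

In practice, a monitoring job like this would run on fresh labelled data as it arrives, alerting the team when accuracy slips so the model can be retrained before the drift affects customers.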


Illegal data scraping, on the other hand, involves AI companies collecting data from various online sources without proper authorisation or consent. It's like copying someone else's homework without asking - it might help you complete the assignment, but it's not ethical or legal. This practice has raised significant privacy and copyright concerns.


Here's how these issues are affecting some major LLMs:

  1. X's Grok: X (formerly Twitter) is using user data to train Grok, raising privacy concerns.

  2. Apple's AI: Apple, along with other companies, used a dataset of YouTube video subtitles for training without authorisation.

  3. Anthropic's Claude: ClaudeBot has been reported to scrape websites aggressively, sometimes ignoring web scraping rules.

  4. Runway's Gen-3: This video generation tool was trained on scraped YouTube videos and pirated media.

  5. Meta's AI: The company faced criticism for planning to use personal data from Facebook and Instagram users for AI training without explicit consent.

  6. OpenAI and Perplexity: These companies, along with others, are grappling with the challenge of model degradation.


While these issues may seem alarming, it's important to note that the AI industry is aware of these challenges and is working towards solutions. For instance, Meta paused its plan to use user data for AI training in Europe after facing regulatory scrutiny. The key takeaway is that as the AI field evolves, so too will the ethical and technical standards governing it. Awareness of these issues is the first step towards addressing them and building more reliable, ethical AI systems for the future.


 

Under the Bonnet: Understanding AI Model Accessibility


As the AI industry grapples with ethical challenges, another debate is heating up around the openness of AI models.


In a recent development in the ongoing AI language model wars, Mark Zuckerberg has joined the chorus of critics questioning OpenAI's commitment to openness. The Meta CEO pointed out the irony in OpenAI's name, given that it has become "the leader in building closed AI models". This criticism echoes sentiments previously expressed by Elon Musk, who co-founded OpenAI but has since become a vocal critic of the organisation's shift towards a closed-source, profit-driven model.


This debate about "open" versus "closed" AI models is causing quite a stir in the tech world, and it's important for small businesses to understand what it means. Think of it like two types of cars. Open-source AI models, like the ones Meta is working on, are like cars where you can open the bonnet and tinker with the engine. Anyone can look inside, modify parts, or even improve the engine's performance. This approach can lead to more innovation and transparency, as many mechanics can contribute to making the car better.

On the flip side, closed or proprietary models, like those from OpenAI and Google, are more like cars with sealed bonnets that you can't open. The manufacturers keep the inner workings hidden, arguing that this prevents misuse and ensures the car runs as intended. While supporters of open-source say it leads to faster progress and makes AI more accessible to everyone, those favouring closed models believe tight control is necessary for safety and reliability.

As this debate continues, the AI industry is trying to figure out what "openness" really means in AI development. For small businesses, this conversation is crucial as it will shape the AI tools available to you in the future, affecting how accessible, customisable, and trustworthy these tools will be.


 

AI's Expanding Frontiers: Showcasing Recent Breakthroughs


Despite the challenges and debates, AI continues to make remarkable progress across various fields.


AI Conquers Complex Mathematics


Google DeepMind has made a breakthrough in AI's mathematical capabilities. Their systems, AlphaProof and AlphaGeometry 2, successfully solved four out of six problems from the International Mathematical Olympiad, earning the equivalent of a silver medal. This achievement demonstrates AI's growing ability to handle advanced reasoning and complex problem-solving, potentially transforming fields like research and academia.


To put this in perspective, AI has been making similar strides in other complex domains. For instance, an AI model recently outperformed top Mahjong players, reminiscent of when IBM's Deep Blue beat the world chess champion years ago. While we might feel a twinge of sympathy for the bronze medallists at the Maths Olympiad, these achievements are clear indicators of AI's rapid progression.


AI Drives Autonomous Drifting


In a fascinating collaboration, Toyota Research Institute and Stanford University have developed AI-powered Supras capable of autonomous tandem drifting. While impressive as a demonstration, this technology has serious implications for developing advanced safety features in consumer vehicles. The AI's ability to control vehicles in extreme conditions could lead to improved handling on slippery surfaces like snow or ice. Previously, there were concerns that autonomous vehicles were not as responsive as humans. However, we are now seeing that AI is making autonomous vehicles even better than humans in certain scenarios.


AI-Generated Fashion Campaigns


Mango, the Spanish fashion giant, has launched a groundbreaking campaign for its Mango Teen line, created entirely using artificial intelligence. The campaign, available across 95 markets, showcases AI's potential in creative industries. By training AI models on real photos of clothing items, Mango has demonstrated how AI can be integrated into design, art, and marketing processes. What's particularly noteworthy is Mango's transparency about their use of AI in this campaign, setting a positive example for ethical AI adoption in creative fields.


AI Enhances Medical Diagnostics


Artificial intelligence is making significant strides in healthcare, particularly in medical diagnostics. Researchers have developed a machine learning model called "Mirai" that uses traditional mammograms to predict breast cancer risk. This development underscores the potential of AI to revolutionise early detection and prevention in healthcare, potentially saving countless lives.


 

A Bit of AI Humor: Perplexity's Cheeky Search Tool


To end on a lighter note, let's look at a fun development in the world of AI-powered search.


Perplexity creator Aravind Srinivas has shared a humorous tool that generates a link to automatically enter a friend's search question into Perplexity for them: "Let me Perplexity that for you." This playful feature adds a touch of sass to the often serious world of AI, reminding us that even as AI capabilities expand, there's always room for a bit of fun.


As we navigate the complex landscape of AI adoption, ethical concerns, and expanding capabilities, it's refreshing to see that the AI community hasn't lost its sense of humor. After all, whether we're dealing with open-source engines or sealed bonnets, sometimes the best response to a friend's easily Perplexity-able question is a cheeky, AI-powered nudge.


Today's news blog covered a wide range of AI developments, both promising and challenging. It remains to be seen how these developments will unfold and what direction they will take. However, as AI continues to evolve, it is crucial to stay informed and not let the rapid pace leave your business behind. As an SME, you have the agility to adapt quickly and gain a competitive edge. Take the first step today – speak with an Intelligence Assist expert and simplify your AI journey!
