So far in 2024: AI innovation, regulation, and the ethical frontier
As generative AI and emerging technologies continue to surge forward at breakneck speed, 2024 has so far been a year of introspection, regulatory shifts, and ethical advancements.
Dr. Pardis Shafafi
Miguel Sabel, Global Director of Strategy and Sustainability
Since the initial excitement over groundbreaking AI capabilities, the global community has moved towards a more nuanced understanding of generative AI and emerging technologies, underscored by a focus on regulation, inclusion, and ethical responsibility. This article revisits our predictions from the end of 2023 to examine the progress and challenges of AI and the collaborative efforts to shape a technology-driven future that aligns with societal values and ethical standards.
AI regulation: a catalyst for innovation?
Given the rise in AI regulation in 2024, it remains to be seen whether these measures will be a setback to businesses or a progressive move for both individuals and society at large. So far, we’ve seen regulations evolving to protect users and society while encouraging technological advancement and innovation. This evolution has shown that legislative frameworks can, indeed, foster an environment conducive to innovation, balancing oversight with creative freedom.
The introduction of the EU's AI Act has set a precedent for global AI regulation, emphasising a nuanced approach to managing the technology's risks while fostering its development. This groundbreaking legislation categorises AI applications by the level of risk they pose, mandating stricter controls on high-risk applications in critical sectors like healthcare, education, and policing. The AI Act's swift implementation has spurred AI companies to adopt more rigorous development practices, ensuring their models are transparent and accountable and that bias is minimised.
In the US, legislative activity has also been noteworthy. In New York, for instance, proposed bills aim at regulating Automated Employment Decision Tools (AEDTs), requiring annual bias audits and ensuring meaningful human oversight to mitigate discrimination and promote fairness in employment decisions. These legislative measures reflect a broader commitment to maintaining ethical standards in AI's deployment.
That said, whenever legislation around new technology is introduced, it tends to follow a familiar script: legislators move slowly, businesses raise concerns, and activists demand more. As these stories play out, they tend to conceal ‘plot twists’ – companies finding (or creating) loopholes, unwelcome scandals, and innovators seizing the opportunity. We'll continue to watch this story unfold closely, as we believe AI legislation will be one of the factors that shapes the landscape in the mid-term.
Interaction beyond the horizon
We anticipated new human-machine interaction models: liquid, more intimate, and able to enhance us. Although the level of interaction innovation we expected hasn’t fully materialised, the pace of innovation remains unbridled. New interfaces are brought to market continuously at an astonishing rate, and the whole industry is trying to make sense of them as they appear, with new taxonomies popping up every week. This effort is probably as necessary as it is futile: today's useful map will be obsolete tomorrow.
As new interfaces continue to emerge, it will be important to challenge conventional paradigms and continuously reevaluate how we learn from and leverage these advancements. This period marks a renaissance for interaction design, proving the discipline's resilience and relevance even as many had come to believe the label was obsolete.
Navigating the hype cycle to reach accelerated maturation
We expected to see more pressure on the refinement of generative AI-enabled products and their material impact, effectively showing the signs of a more mature technology. Has that happened or are we in the same stage of the hype cycle? The reality is nuanced.
Looking at the stock market, investors seem patient with the evolution of the hype – and, more importantly, they have the money to back it. Even so, there are signs that scrutiny is increasing, reflected in research and commentary questioning the financial viability and ethical implications of AI applications. For example, an MIT study of computer vision found that, once investment requirements are considered, only 23% of human tasks can be replaced by AI in a way that makes financial sense. A recent McKinsey study concluded that “capturing generative AI’s enormous potential value is harder than expected.”
This scrutiny could be a healthy sign, potentially averting unchecked ‘AI euphoria’ and promoting more sustainable development.
The AI arms race continues to evolve
We hoped for more distributed innovation, moving away from a de facto oligopoly and towards a richer ecosystem. Have new players and solutions amazed us?
The pace and form of product releases, from the usual suspects and aspiring challengers alike, are still remarkable. Yet a question is taking shape for all of them, challengers included: where is the business model innovation? Defining new ways to create and share value has historically gone hand in hand with key moments of technological innovation, and we are not seeing that yet. The fact that some products have no explicit revenue model is even more concerning. Are humans going to be the product – again?
Overall, the hoped-for diversification of the AI innovation ecosystem remains an unfolding narrative. We think the extent to which new players have disrupted the field and enriched the technological landscape with novel solutions is yet to be fully appreciated.
Expectations about AI models: Ensuring safety and trust
We hoped for an evolution of commercially available models not only to be more powerful, but also to help users understand, trust, and effectively manage these technologies. How have models and products evolved to ensure they cause no harm?
Broadly speaking, ethics assurance processes are becoming more prominent in business practice, driven by consumer and employee demand and by a shifting policy landscape that pays increasing attention to rights and climate degradation. We see this in our work at Designit, where clients are becoming more receptive and eager to understand and adopt frameworks like Do No Harm. Although it’s too soon to know how this has affected specific new products, the wave of AI-specific policies finally coming into effect after years of discussion (including EU regulations) will at the very least ensure that those responsible for demonstrable harm can be held to account.
As for measures like bias mitigation and fairness: the ample public discussion of these points in traditional and social media suggests wide-reaching acknowledgment of the biggest blind spots in this and other emerging technologies – and, we can hope, a more informed mindset shift around them.
What's next?
The journey of AI through 2024 reveals a complex relationship involving technological evolution, societal impact, regulatory adaptation, and ethical considerations. While some predictions have crystallised into reality, others remain in flux, underscoring the continuous interplay between innovation and its broader implications.
Do you want to explore the potential of gen AI and emerging tech for your brand? Let's work together.