
Takeaways from MIT Sloan 2025 AI Conference: “Age of Implementation”

Members of Choate’s Patent and IP team attended the MIT Sloan 2025 AI Conference at the MIT Media Lab on February 7, 2025. This year’s conference was themed “Age of Implementation” and featured keynote talks from Ramin Hasani (Co-founder & CEO, Liquid AI), Vinod Khosla (Co-founder, Sun Microsystems; venture capitalist), and Swami Sivasubramanian (VP, AI & Data at Amazon Web Services (AWS)). The conference also included talks from start-up founders, academics, and members of state and federal government, as well as leaders from mega-cap companies (Microsoft, Google, Meta, OpenAI, etc.).

Below are takeaways from the conference. These represent the views of the conference speakers.

Policy More Bullish on AI

Financial markets reacted on January 27, 2025 to reports that the Chinese-developed “DeepSeek” model performs accurately and was developed at a fraction of the cost (a reported $6 million) of other large language models (LLMs). This may have been a watershed moment for American AI players. Coupled with the recent US Administration change, public and private policy trends may begin to lean toward accelerating AI development and away from caution.

Policy questions touch on the tension between data-mining interests and data-rights advocates, and this subject is likely to become increasingly contentious in 2025.

AI Adoption Will Accelerate in 2025

An estimated 70% of industries have already been disrupted by, and/or are making use of, AI. The pace of AI adoption is expected to be faster in 2025 than in 2024. Views of AI are shifting from a cute novelty, to a tool for enhancing productivity, to an enabling, game-changing suite of technologies. Now is the time for companies to ask which programs on their technology roadmaps were hard or impossible to pursue without AI; some of those may now be possible.

Only a few years ago, discussions around AI were technology-focused. Now they are business-focused: people first ask what problems they need to solve, and then employ AI to help solve them. Additionally, businesses can bring products to market within months, and with much smaller investments, by building specialized models on top of existing, more generalized foundational models.

However, AI adoption cannot happen unless company leaders accept the technology. Company leadership, including the board and C-suite management, needs to be involved in promoting the use and development of AI technologies. There also remain many challenges in getting users to adopt AI technologies into their workflows. Integrating AI features into existing platforms has given companies much greater success than offering stand-alone products.

Data is the Differentiator

As AI models become commoditized (i.e., foundational models are becoming cheaper, more accessible, and more consistent in performance), each company’s own data can be a major differentiator and competitive advantage. Individual companies have access to, and can control, their own data, and should think about how to exploit it using AI. In some situations, generative AI may be useful for supplementing existing datasets with additional training data. For example, companies like Boston Dynamics and Waymo have used virtual training scenarios to generate simulations that have proven useful for real-world implementations of their technologies.

Infrastructure Optimization is Becoming More Important

As AI adoption continues to increase, energy consumption, memory, computing power, and underlying hardware costs are all becoming bigger considerations. Accordingly, return on investment (ROI) is playing a greater role in decision-making, as there are more viable AI models to choose from.

Distributed models, which have components that “live” on mobile devices and leverage, for example, an iPhone’s memory and processing power, are beginning to be deployed. Companies are also increasingly looking toward lighter-weight options (i.e., models that use less memory by virtue of having fewer parameters or processing fewer tokens), for example by using techniques such as model distillation to transfer the capabilities of large foundational models into smaller, more specialized models.
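For readers curious about the mechanics, the following is a minimal, hypothetical sketch of model distillation in PyTorch: a small “student” network is trained to match the softened output distribution of a larger, frozen “teacher.” The model sizes, data, and hyperparameters are illustrative placeholders, not details from the conference.

```python
# Hypothetical distillation sketch: a small "student" learns to mimic a larger,
# pre-trained "teacher" (standing in for a foundational model). All shapes,
# data, and hyperparameters here are placeholders for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Linear(128, 512), nn.ReLU(), nn.Linear(512, 10))  # large, assumed pre-trained
student = nn.Sequential(nn.Linear(128, 32), nn.ReLU(), nn.Linear(32, 10))    # small, cheaper to deploy

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 2.0  # softens distributions so the student learns relative class similarities

for _ in range(100):                      # stand-in for a real training loop
    x = torch.randn(64, 128)              # stand-in for a batch of real inputs
    with torch.no_grad():
        teacher_logits = teacher(x)       # teacher is frozen; used only as a training target
    student_logits = student(x)

    # KL divergence between the softened teacher and student distributions
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In practice, the distillation loss is often blended with a standard supervised loss on labeled data, but the core idea is the same: the smaller model inherits much of the larger model’s behavior at a fraction of the memory and compute cost.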

AI Agents Reach Prime-Time

AI agents, which are effective at accomplishing simple tasks, are becoming more widespread as well. Rather than general-purpose AIs that try to do everything, agents are being deployed for smaller, digestible tasks that are currently done by humans. Multi-agent systems have also been shown to work more effectively than a single agent that takes on too many tasks.

Agents are currently being deployed for tasks such as password resets, inventory tracking, fraud detection, expense management, patient monitoring and diagnostics, call scheduling, subscription management, employee onboarding and off-boarding, and database updating. Adaptive agents that learn over time to improve their performance are also gaining popularity.
