Biotech firm Colossal Biosciences has secured $200 million in a recent funding round to continue its work on "de-extinction," a process aimed at bringing back extinct species such as the woolly mammoth. Colossal is a US-based company with offices in Dallas and Boston, and it has made significant progress in de-extinction, a concept that echoes the plot of Michael Crichton’s novel Jurassic Park. The company's investors include game developer Richard Garriott and diversified holding company TWG Global, which has interests in technology, AI, financial services, and media. Since its launch in September 2021, Colossal has raised a total of $435 million in funding, bringing its valuation to $10.2 billion. The latest funding will be used to further develop its genetic engineering technologies and to create innovative software, wetware, and hardware solutions. These technologies have applications not only in de-extinction but also in species preservation and human healthcare. Colossal currently employs over 170 scientists and collaborates with labs in Boston, Dallas, and Melbourne, Australia. The company also sponsors more than 40 full-time postdoctoral scholars and research programs in 16 partner labs at prestigious universities worldwide. Its scientific advisory board includes over 95 leading scientists in fields such as genomics, ancient DNA, ecology, conservation, developmental biology, and paleontology. Together, they are working to solve complex problems in biology, including mapping genotypes to traits and behaviors, understanding developmental pathways to phenotypes, and developing new tools for genome engineering. In 2024, the company established the Colossal Foundation, a non-profit organization that oversees the application of Colossal-developed science and technology innovations. The foundation currently supports 48 conservation partners and their global initiatives.
The first step in every de-extinction project involves recovering and analyzing preserved genetic material and using that data to identify each species’ core genomic components. Colossal has assembled a team of global experts in ancient DNA research to push advances in this area. The company's scientists have achieved significant breakthroughs in the de-extinction process for their three flagship species: the mammoth, thylacine, and dodo. For example, the mammoth project has generated new genomic resources, made breakthroughs in cell biology and genome engineering, and explored the ecological impact of de-extinction. Colossal is not only focused on de-extinction but also on species preservation. By 2050, it is projected that over 50% of the world's animal species may be extinct. Colossal's toolkit of software, wetware, and hardware solutions provides new, scalable approaches to this existential threat and biodiversity crisis. The company's work on mammoth restoration, for example, has advanced reproductive and genetic technologies that can help preserve endangered elephant species. Similarly, the dodo program is pioneering avian genetic tools that will benefit threatened bird species worldwide. Through the Colossal Foundation and its partnerships with leading conservation organizations, Colossal is transforming these scientific advances into practical solutions that can help protect and restore vulnerable species across multiple taxonomic families. In addition to TWG Global, Colossal's strategic investors include USIT, Animal Capital, Breyer Capital, At One Ventures, In-Q-Tel, BOLD Capital, Peak 6, and Draper Associates, among others.
The latest trend in generative artificial intelligence (AI) is Agentic AI, which refers to AI systems that perform actions on behalf of the user. However, an even more innovative concept, known as ambient agents, is beginning to take shape. This approach, introduced by a leading Agentic AI company, involves AI systems running in the background, constantly monitoring event streams and acting when necessary based on pre-set instructions and user intent. While the term "ambient agents" is relatively new, the idea of ambient intelligence, where AI is always 'listening', is not. For instance, Amazon's Alexa personal assistant technology is often associated with ambient intelligence. The goal of ambient agents is to automate repetitive tasks and enhance user capabilities by running persistently in the background. This allows users to focus on higher-level tasks while the agents handle routine work. To demonstrate the potential of ambient agents, the company has developed initial use cases for email and social media management. The technology uses various open-source solutions and is designed to help users manage and respond to emails and social media notifications when necessary. The concept of ambient agents was born out of a need to solve a common problem: email overload. The company's CEO began developing an ambient agent to manage his own email; the agent automatically categorizes incoming messages and handles the triage process. Over time, he refined and improved the agent's capabilities through regular use, addressing pain points as they surfaced. The system is complex, involving multiple components and language models. The company has also designed a new user interface, the agent inbox, specifically for interacting with ambient agents. This interface displays all ongoing communications between users and agents and makes it easy to track pending actions. The company's technology, primarily a tool for developers, can now also be used to build and deploy ambient agents.
Developers can use the open-source technology to create an ambient agent, with additional tools available to simplify the process. The company also offers a commercial platform that provides observability and evaluation for agents, helping developers monitor and evaluate their performance. The CEO is optimistic about the adoption of ambient agents by developers in the future. While he believes that true artificial general intelligence (AGI) will likely come from improvements in reasoning models, he sees great value in making better use of existing models through the concept of ambient agents. The company has already released an open-source version of the email assistant and plans to release a new social media ambient agent and an open-source version of the agent inbox in the near future. As the field of generative AI continues to evolve, these developments offer exciting new possibilities for enhancing productivity and efficiency.
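The monitor-and-triage loop described above can be sketched in a few lines of Python. This is a purely illustrative sketch, with hypothetical rules and names rather than the company's actual implementation: an agent consumes an event stream, applies pre-set triage rules, and surfaces only the items that need the user's attention.

```python
# Illustrative ambient-agent core loop: watch an event stream (here, a list
# of incoming emails) and triage each item against pre-set rules, surfacing
# only what needs the user's attention. Rules and field names are hypothetical.

def triage(email: dict) -> str:
    """Classify an email as 'ignore', 'notify', or 'respond'."""
    subject = email["subject"].lower()
    if "unsubscribe" in email.get("body", "").lower():
        return "ignore"                      # bulk mail: drop silently
    if any(k in subject for k in ("urgent", "action required")):
        return "respond"                     # needs a drafted reply
    return "notify"                          # surface in the agent inbox

def run_ambient_agent(event_stream):
    """Process events in the background, returning only pending actions."""
    pending = []
    for email in event_stream:
        decision = triage(email)
        if decision != "ignore":
            pending.append((email["subject"], decision))
    return pending
```

A real ambient agent would replace the keyword rules with a language model call and feed the pending actions into an agent-inbox UI; the loop structure, however, stays the same.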
Microsoft has unveiled Copilot Chat, a reimagined version of its AI chat experience designed for businesses. This product is part of Microsoft's ongoing efforts to position Copilot as the primary interface for AI. The tech giant has already launched several versions of the GPT-4o-powered assistant for both personal and business users. Starting today, businesses can use Copilot Chat to explore many of the features of the more comprehensive Microsoft 365 Copilot, which costs $30 per user per month. The chat experience is free, but task automation capabilities, a significant feature, will operate on a consumption-based model. Microsoft aims to give its commercial customers a taste of the paid version of Copilot. The company hopes that by offering features like AI agents, Microsoft 365 users, including customer service reps, marketing leads, and frontline technicians, will integrate Copilot into their daily routines and eventually opt for the paid plan. This move comes as no surprise, given that some enterprises have found the Microsoft 365 Copilot rollout less than ideal, citing high costs and complicated implementation stemming from security issues. Meanwhile, Google is advancing with its AI for work, Gemini for Workspace, positioning it as an affordable and easily accessible alternative. Microsoft 365 Copilot Chat will maintain its chat interface, allowing users to input queries and receive AI-generated responses. The underlying model, OpenAI's GPT-4o, will provide web-based information, enabling users to conduct market research or prepare strategic documents. The AI even supports file uploads, allowing users to request summaries, analyses, or recommendations from documents, and create images for purposes such as social media marketing. The highlight, however, is the support for AI agents. IT administrators can use Copilot Studio to create domain-specific agents that can assist employees via Microsoft 365 Copilot Chat.
These agents can automate repetitive tasks and provide relevant information, drawing on data from the web and work data via Microsoft Graph or third-party graph connectors. However, access to these agents will not be entirely free. They will be available on a consumption-based model, with usage metered by the number of messages an organization consumes. Messages can be purchased through Copilot Studio in Microsoft Azure for $0.01 per message, or via pre-paid message packs at $200 for 25,000 messages per month (an effective rate of $0.008 per message). Microsoft's move aims to monetize Microsoft 365 users with basic AI needs while creating a potential path to convert them into paying customers. It is also a response to Google's push with the Gemini assistant, which is available for free within its Workspace apps for Workspace Business and Enterprise customers. Microsoft stands out by offering usage-based AI capabilities, enabling businesses to create custom agents for task automation, a feature currently lacking in Gemini. Ultimately, the choice between the two depends on the ecosystem you're aligned with and your specific needs. Google's approach provides easy access to Gemini within essential business apps but lacks task automation capabilities. On the other hand, Microsoft 365 offers web-based chat and task automation features (on a pay-as-you-go model), but requires a higher investment to unlock AI functionality within its work apps.
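The two pricing options above can be compared with a quick back-of-the-envelope calculation. The figures come from the article; the helper function itself is just an illustration.

```python
# Compare Copilot Studio's pay-as-you-go rate ($0.01/message) with the
# pre-paid packs ($200 for 25,000 messages/month), per the figures above.

PAY_AS_YOU_GO = 0.01          # dollars per message
PACK_PRICE = 200              # dollars per pack
PACK_MESSAGES = 25_000        # messages covered by one pack, per month

def monthly_cost(messages: int, use_packs: bool) -> float:
    """Estimated monthly cost for a given message volume."""
    if not use_packs:
        return messages * PAY_AS_YOU_GO
    packs = -(-messages // PACK_MESSAGES)   # ceiling division: whole packs
    return packs * PACK_PRICE
```

At pack scale the effective rate drops to $0.008 per message, so a pack becomes the cheaper option once an organization expects more than 20,000 messages in a month.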
Microsoft has enhanced its AutoGen orchestration framework, aiming to increase the adaptability of the AI agents it helps create and provide organizations with greater control. The latest version, AutoGen v0.4, has been designed to strengthen AI agents' robustness and address customer-identified issues related to architectural limitations. The initial release of AutoGen sparked widespread interest in AI technologies. However, users found themselves grappling with architectural constraints, inefficient APIs, and limited debugging and intervention functionality. To address these concerns, Microsoft has focused on enhancing observability and control, fostering multi-agent collaboration, and developing reusable components in AutoGen v0.4. The updated framework is more modular and extensible, emphasizing scalability and distributed agent networks. It introduces asynchronous messaging and cross-language support, as well as observability, debugging, and a variety of built-in and community extensions. Asynchronous messaging enables agents to support both event-driven and request/response interaction patterns. Furthermore, the framework's modularity allows developers to add plug-in components and create long-running agents, facilitating the design of more complex and distributed agent networks. To simplify the process of working with multi-agent teams and advanced model clients, AutoGen's extension module has been improved. It also enables open-source developers to manage their extensions more effectively. Observability is addressed through built-in metric tracking, message tracing, and debugging tools, allowing users to monitor agent interactions more closely. The updates also allow for interoperability between agents written in different programming languages, initially Python and .NET, with more languages to be supported soon. Microsoft has restructured AutoGen’s framework to clarify responsibilities across the framework, tools, and applications.
It now consists of three layers: the core layer, which provides the foundational building blocks for an event-driven system; AgentChat, a task-driven, high-level API that includes group chat, code execution, and pre-built agents; and first-party extensions that interface with integrations like Azure's code executor and OpenAI's model client. In addition to updating its framework, Microsoft has also upgraded tools built around AutoGen, such as AutoGen Studio. This low-code interface for rapidly prototyping agents has been rebuilt on the AutoGen v0.4 AgentChat API, enabling real-time agent updates, conversation pausing, agent redirection, agent team design with a drag-and-drop interface, custom agent importing, and interactive feedback. Microsoft initially launched AutoGen in October 2023 to streamline agent communication. It was one of the first AI agent orchestration frameworks released, along with LangChain and LlamaIndex, before AI agents became the popular topic they are today. The tech giant has since released other agent systems, such as Magentic-One, and has deployed one of the largest AI agent ecosystems via its Copilot Studio platform. Microsoft is not alone in this field, however. Competitors like Salesforce and ServiceNow have launched their own agent systems, while AWS has added more support for creating multi-agent systems to its Bedrock platform. The use of generative AI continues to evolve, with companies constantly seeking new ways to maximize their return on investment in this technology.
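The asynchronous, event-driven messaging pattern at the heart of AutoGen v0.4 can be illustrated with a minimal `asyncio` sketch. This is a generic illustration of the pattern under stated assumptions, not AutoGen's actual API: each agent owns an inbox queue and reacts to messages as they arrive, rather than blocking on a synchronous call chain.

```python
import asyncio

# Minimal sketch of event-driven agent messaging: each agent drains its own
# inbox queue asynchronously; a broadcaster publishes events to all agents.
# Generic illustration only — not the AutoGen v0.4 API.

class Agent:
    def __init__(self, name, handler):
        self.name = name
        self.handler = handler          # callback invoked per message
        self.inbox = asyncio.Queue()

    async def run(self):
        while True:
            msg = await self.inbox.get()
            if msg is None:             # sentinel: shut this agent down
                break
            self.handler(self.name, msg)

async def broadcast(agents, messages):
    """Publish each message to every agent, then signal shutdown."""
    tasks = [asyncio.create_task(a.run()) for a in agents]
    for msg in messages:
        for a in agents:
            await a.inbox.put(msg)
    for a in agents:
        await a.inbox.put(None)
    await asyncio.gather(*tasks)
```

Because each agent runs as its own task, new agents can be added without touching the others, which is the modularity property the article attributes to the redesigned framework.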
A renowned former Google engineer and the creator of the popular Python deep learning framework Keras, along with the co-founder of Zapier, have joined forces to launch Ndea, a new artificial intelligence (AI) research and science lab. Their vision is to merge intuitive pattern recognition, powered by deep learning, with formal reasoning through a method they've termed "guided program synthesis." The founders believe this combination will enable AI systems to adapt and innovate beyond the current task-specific applications, paving the way to artificial general intelligence (AGI). AGI is broadly defined in the AI community as machine intelligence that can outperform humans at most economically valuable cognitive tasks. The founders have not yet disclosed whether they have received external financial backing for this venture or are funding it themselves. This announcement follows a recent trend of tech entrepreneurs launching AI-focused startups. While existing deep learning systems have achieved remarkable feats, the founders argue these systems are fundamentally limited by their dependence on large datasets and their inefficient adaptability to new tasks. They propose that program synthesis is the key to overcoming these limitations. Unlike traditional deep learning, which interpolates between data points, program synthesis searches for discrete programs that explain data, allowing for greater generalization from fewer data points. Ndea's mission extends beyond the creation of AGI. The lab aims to serve as a "factory for rapid scientific advancement," capable of tackling both known and unknown challenges. From addressing current frontiers like autonomous vehicles and sustainable energy to accelerating new discoveries, Ndea envisions itself as a catalyst for scientific progress. The founders believe their research direction has the potential to unlock breakthroughs and redefine the boundaries of human knowledge.
They aspire to develop an AI that can learn as efficiently as humans and continue to improve over time without bottlenecks. Program synthesis, a cornerstone of Ndea's research, is still a relatively nascent field. However, its potential is being increasingly recognized by frontier AI labs, even if many consider it a minor component of the requirements for AGI. In contrast, Ndea views program synthesis as equally significant as deep learning and central to their approach. Ndea is actively seeking a globally distributed team of researchers and engineers to build what it describes as the world's most "talent-dense program synthesis team." The company operates fully remotely and is seeking candidates with strong technical expertise, particularly in translating mathematical concepts into code. The founders bring extensive experience to Ndea. At Google, the former engineer worked on core research into deep learning and AI systems, identifying limitations of existing models and opportunities for improvement. His contributions include the widely used ARC-AGI benchmark, a metric for measuring progress toward AGI. He is also the author of the book "Deep Learning with Python" and has been recognized among Time’s "100 Most Influential People in AI." The co-founder of Zapier led engineering and product development at the world's largest AI automation company, where he pioneered best practices for globally distributed teams. Both founders are also co-founders of the ARC Prize Foundation, a nonprofit organization focused on advancing open AGI research. Ndea, named after the Greek concepts ennoia (intuitive understanding) and dianoia (logical reasoning), aims to operationalize AGI to compress centuries of scientific progress into decades or even years. Despite the challenges of pursuing AGI, the founders remain optimistic about their approach. They view AGI as the key to addressing humanity's most pressing challenges and uncovering new opportunities for discovery.
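The core idea behind program synthesis, searching for a discrete program that explains the data rather than interpolating between data points, can be shown with a toy enumerative synthesizer. The mini-DSL and primitives below are invented for illustration; Ndea's "guided" variant would use learned models to steer a search like this instead of brute-force enumeration.

```python
from itertools import product

# Toy enumerative program synthesis: search over compositions of a tiny set
# of primitives for a program consistent with input/output examples.
# Purely illustrative — the DSL and search strategy are hypothetical.

PRIMITIVES = {
    "inc":    lambda x: x + 1,
    "double": lambda x: x * 2,
    "square": lambda x: x * x,
    "neg":    lambda x: -x,
}

def synthesize(examples, max_len=3):
    """Return the shortest primitive sequence consistent with all examples."""
    for length in range(1, max_len + 1):
        for prog in product(PRIMITIVES, repeat=length):
            def run(x, prog=prog):
                for op in prog:
                    x = PRIMITIVES[op](x)
                return x
            if all(run(i) == o for i, o in examples):
                return prog
    return None
```

Note that two examples such as (1, 3) and (5, 11) suffice to pin down "double then inc" (f(x) = 2x + 1); a found program generalizes to all inputs, which is the data-efficiency argument the founders make.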
The financial services sector is grappling with advanced identity-based cyberattacks that threaten to steal billions of dollars, disrupt transactions, and erode years of built trust. These attacks are becoming increasingly complex, with cybercriminals exploiting gaps in the industry's identity security measures. They employ a range of tactics, from leveraging Lightweight Directory Access Protocol (LDAP) to using adversarial AI techniques to perpetrate synthetic fraud. Financial institutions are under significant threat, with exposure to synthetic identity fraud exceeding $3.1 billion and growing by 14.2% in the past year. Additionally, the use of deepfakes has surged by 3000% and is expected to increase by another 50-60% in 2024. Other threats such as smishing texts, multi-factor authentication (MFA) fatigue, and deepfake impersonations are also on the rise. As one of the largest retail mortgage lenders in the U.S., Rate Companies is a prime target for cybercriminals due to the billions of dollars in sensitive transactions it processes daily. To combat these threats, the company has adopted a comprehensive AI-based strategy, focusing on protecting customer, employee, and partner identities. Rate Companies understands the importance of AI threat modeling in protecting customers' identities and securing transactions. The company has implemented identity-based anomaly detection and real-time threat response mechanisms, adopting a zero-trust framework that revolves around identity and continuous verification. A "never trust, always verify" approach is applied to identity validation, with least privileged access defined and every transaction and workflow monitored in real time. Recognizing the need for swift detection and response, the company follows the "1-10-60" SOC model: 1 minute to detect, 10 minutes to triage, and 60 minutes to contain threats.
To accommodate the cyclical nature of the mortgage industry, Rate Companies has adopted CrowdStrike’s adaptable licensing model, Falcon Flex, which allows for easy scaling of cybersecurity measures. Rate Companies has also faced the challenge of securing every regional and satellite office, monitoring identities and their relative privileges, and setting time limits on resource access. AI threat modeling is utilized to define least privileged access and monitor every transaction and workflow in real time. The company's experiences have yielded several lessons in the use of AI to thwart sophisticated identity attacks. For instance, they found that their previous vendor was generating more noise than actionable alerts. Switching to CrowdStrike’s Falcon Complete Next-Gen managed detection and response (MDR) resulted in more legitimate threats being detected. As the company continues to grow, it requires cloud security that can adapt to changing market conditions. This includes real-time visibility and automated detection of misconfigurations across cloud assets, as well as integration across diverse cloud environments. For AI threat modeling to be successful, endpoint detection and response (EDR), identity protection, cloud security, and additional modules all need to be under one console. This consolidation makes management and incident response more efficient, providing a clear, real-time view of all assets and automatically flagging misconfigurations, vulnerabilities, and unauthorized access. In the fight against cybercrime, accuracy, precision, and speed are critical. The attack surface is not just the infrastructure, but also the time available to respond. It's a race against the clock, and the right tools and strategies can make all the difference.
In the rapidly evolving world of AI video creation, Luma AI has made a significant impact with its Dream Machine platform, which was launched last summer. This platform, which is designed to generate AI videos, has been continually updated and improved to keep pace with the numerous other models that have been released by competitors both in the U.S. and China. The Dream Machine platform has recently been enhanced with new features such as still image generation and brainstorming boards, and an iOS app has also been released. The enhanced model, according to Luma AI's CEO, offers fast and natural motion and physics, and has been trained with ten times more computational power than the original model. This has significantly increased the success rate of creating production-ready generations and made video storytelling more accessible. The Dream Machine platform is available in a free tier with 720p generations, with a cap on the number of generations each month. Paid plans start at $6.99 per month, offering 1080p visuals, and range up to an enterprise plan priced at $1,672.92 per year. Currently, the platform's Ray2 model is limited to text-to-video functionality. This allows users to input descriptions that are then converted into five- or ten-second video clips. Despite the high demand from new users that can sometimes slow the generation process, the model can create new videos in seconds. The versatility of the model is demonstrated in examples shared by Luma and early testers. These include lifelike and fluid motion videos such as a man running through an Antarctic snowstorm surrounded by explosions, and a ballerina performing on an ice floe in the Arctic. The model can also create realistic versions of surreal ideas, such as a giraffe surfing. Feedback from AI video creators who have tried the new model has been largely positive, praising the improved cinematography, lighting, and realism.
However, tests have shown that more complex prompts can sometimes result in unnatural and glitchy results. In the future, Luma plans to add image-to-video, video-to-video, and editing capabilities to the Ray2 model, further expanding its creative potential. To celebrate the launch of Ray2, Luma Labs is hosting the Ray2 Awards, offering creators the chance to win up to $7,000 in prizes. Luma Labs has also launched an affiliate program, offering participants the opportunity to earn commissions by promoting its tools. For those interested in staying informed about the latest developments in generative AI, VB Daily offers insights into regulatory shifts and practical deployments in this exciting field.
Botika, an innovative startup specializing in AI-generated fashion models, has successfully raised $8 million in seed funding and launched its new mobile app for iOS devices. The company aims to revolutionize the fashion industry by offering AI-generated models that enhance the visual appeal of online clothing brands. The latest enhancements to Botika's generative AI platform, coupled with the mobile app launch, provide fashion brands with a practical tool to create high-quality, on-model images for e-commerce. The $8 million funding round, co-led by Stardom and Secret Chord Ventures with participation from Seedcamp, will fuel further product innovation and support Botika's expansion in the fashion industry. Over the past year, Botika has experienced significant growth, multiplying its revenue by nine times and its customer base by eleven times. The newly acquired capital will enable Botika to offer a cost-effective solution for online clothing brands to improve their visual content, boost sales, and reduce production costs. Botika's unique AI technology transforms basic product images into professional, studio-quality shots. It creates AI-generated models and backgrounds that mirror each brand's aesthetic, providing a seamless transition from ordinary product images to captivating visuals. The high production cost of professional photoshoots for e-commerce, which can range from tens of thousands to hundreds of thousands of dollars, is a major pain point for fashion brands. Botika addresses this issue by providing a manageable cost and timeline for creating high-quality, on-brand images. The new Botika mobile app puts the power of this technology into users' hands, enabling fashion brands to create striking, on-model product photos anytime, anywhere. The app also includes support for flat lays and packshots, extending Botika's capabilities beyond on-model photography. 
Designed with the needs of mobile users on e-commerce platforms in mind, the app facilitates professional-quality photo creation directly from a smartphone. Brands can shoot, edit, and sync images to their online stores, reducing costs and time-to-market while maintaining quality. Botika's CEO, Eran Dagan, emphasizes the importance of engaging and accurate product imagery in converting browsing shoppers into buyers. He believes the new mobile app will empower brands to bring their creative visions to life and compete effectively in the increasingly competitive online fashion market. The company, which currently employs 15 people, plans to build on the momentum generated by its recent funding and product innovations throughout 2025. Botika's founders have spent the last five years developing computer vision AI technology, initially intended for gaming, but pivoted towards fashion based on positive feedback and the potential for significant impact in the industry. Botika is clear that its platform is not intended to replace human creativity but rather to enhance it by providing access to high-quality visuals that may otherwise be unaffordable. The company reports that many brands use a combination of real and AI-generated models to serve different needs, enabling them to scale their efforts, generate visuals faster, and make their content generation processes more efficient.
As the implementation of autonomous artificial intelligence (AI) systems expands, the demand for safety and security measures also increases. In response to this growing need, Nvidia has unveiled a series of enhancements to its NeMo Guardrails technology, specifically developed to cater to the needs of autonomous AI. Guardrails essentially provide a policy and control framework for large language models (LLMs) to prevent unauthorized and unintended outputs. This concept has been widely adopted by various vendors in recent years, including Amazon Web Services (AWS). Nvidia's latest updates to NeMo Guardrails aim to simplify the deployment process for organizations and offer more detailed control methods. Now available as Nvidia Inference Microservices (NIMs) optimized for Nvidia’s GPUs, these guardrails are specifically designed for autonomous AI deployments, rather than only for individual LLMs. Nvidia's VP for enterprise AI models, software, and services emphasized the shift in approach, stating, "It’s not just about guard-railing a model anymore. It’s about guard-railing the total system.” By 2025, the use of autonomous AI is predicted to be a prevalent trend. Despite its numerous advantages, autonomous AI also introduces new challenges, particularly in terms of security, data privacy, and governance requirements. These challenges can present significant obstacles to deployment. To address these issues, Nvidia has introduced three new NeMo Guardrails NIM services for content safety, topic control, and jailbreak detection. These services are designed to simplify the complexities of securing autonomous AI systems, which often involve multiple interconnected agents and models. The VP provided an example of a retail customer service scenario involving multiple agents: a reasoning LLM, a retrieval-augmented generation (RAG) agent, and a customer service assistant agent.
She noted that each interaction or LLM requires its own set of guardrails, adding to the complexity of the system. However, a key goal of NeMo Guardrails NIMs is to simplify this process for enterprises. Performance is another major concern for businesses deploying autonomous AI. Adding guardrails can introduce latency, which can affect the overall performance of the AI system. To mitigate this, the latest NeMo Guardrail NIMs have been fine-tuned and optimized. Early testing by Nvidia indicates that these guardrails can provide 50% better protection, adding only about half a second of latency. The Nvidia NeMo Guardrails NIMs are available under the Nvidia AI enterprise license, which is currently priced at $4,500 per GPU per year. Developers can also access them for free under an open source license.
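Conceptually, the three guardrail services (content safety, topic control, and jailbreak detection) compose as independent checks wrapped around a model call. The sketch below illustrates that layered structure only; the keyword rules stand in for the actual NIM classifiers, and none of this is NeMo Guardrails' real API.

```python
# Generic sketch of layered guardrails around an LLM call, mirroring the
# three checks described above. Keyword rules are hypothetical stand-ins
# for real classifiers; this is not the NeMo Guardrails API.

def content_safety(text):   return "hate" not in text.lower()
def topic_control(text):    return "politics" not in text.lower()
def jailbreak_check(text):  return "ignore previous instructions" not in text.lower()

GUARDRAILS = [content_safety, topic_control, jailbreak_check]

def guarded_call(prompt, model):
    """Run every input rail, call the model, then re-check the output."""
    for rail in GUARDRAILS:
        if not rail(prompt):
            return "Request blocked by guardrails."
    response = model(prompt)
    if not content_safety(response):        # output rail
        return "Response blocked by guardrails."
    return response
```

In an agentic deployment, each agent-to-agent hop would get its own `guarded_call`-style wrapper, which is why the article stresses guard-railing the total system rather than a single model.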
A pioneering artificial intelligence startup has launched a specialized search engine, HRGPT, designed to revolutionize workforce management. The company believes that HR departments will be the next significant area for AI integration in businesses. The AI-powered search engine will enable companies to cross-reference their internal HR data with employment laws and regulations. The startup, which transitioned out of stealth mode last year, has recently secured a $5 million strategic investment in its latest funding round, raising its total seed funding to $32 million. This reflects growing interest in AI applications tailored to specific industries. The company's CEO predicts a future where all HR departments will utilize AI agents to manage various tasks across the HR spectrum. The startup is already competing with established HR software providers by focusing exclusively on AI-powered solutions. Multinational companies are already using its platform, including a global sporting goods company that utilizes the technology for employee onboarding across its 17 worldwide offices. HRGPT stands out from general-purpose AI chatbots by combining real-time web search with access to internal company data and specialized HR knowledge. The system can handle tasks from generating employment agreements to tracking time-off requests and managing international expense reimbursements. The platform also integrates with a leading professional services network for employment law expertise. Despite the public attention garnered by consumer AI tools, the CEO believes that the next wave of AI adoption will come from businesses. He foresees a two to three-year period where businesses will increasingly integrate AI into their organizations, with HR being a prime candidate due to its numerous applicable use cases. 
The company's vision is to automate complex HR processes like payroll management and employee analytics, aiming to build a billion-dollar company with less than 50 employees by leveraging AI extensively in their operations. However, the startup faces considerable challenges, including deciding which features to develop next due to high customer demand. The company also needs to ensure accuracy and compliance in its automated HR functions, particularly for sensitive tasks like employment agreements and international payments. The success of this startup could indicate whether specialized AI tools can successfully compete against established enterprise software providers who are also incorporating AI capabilities into their existing products. Currently, early customers seem satisfied, with the startup reporting that its AI agents perform tasks hourly across its customer base.