AI Innovations and Investments Surge Forward

Perplexity AI's funding and groundbreaking research redefine AI's future.

April 24, 2024



Perplexity AI lands $63M funding for generative AI search at $1B valuation

Summarized by: Jordan Hayes [siliconangle.com]

Perplexity AI, a generative AI search engine startup, has secured $62.7 million in new funding, doubling its valuation to more than $1 billion. The round was led by former Y Combinator partner Daniel Gross, with backing from Stanley Druckenmiller, Y Combinator CEO Garry Tan, and Figma CEO Dylan Field, and participation from Nvidia, Elad Gil, Nat Friedman, and Amazon founder Jeff Bezos. The raise brings Perplexity's total funding to $165 million and sets the stage for a potential follow-on valuation of $2.5 billion to $3 billion. Perplexity distinguishes itself with an AI chatbot that answers conversational queries with intuitive, citation-rich responses. The company also launched Enterprise Pro, a $40-per-month service with enhanced security and privacy for businesses, and its global expansion plans include partnerships with SoftBank Corp. and Deutsche Telekom that target more than 335 million potential new users.

Aligning LLM Agents by Learning Latent Preference from User Edits

Summarized by: Alexa Sterling [arxiv.org]

In today’s issue, we delve into a new method for aligning large language models (LLMs) with user preferences by analyzing the edits users make. The paper, “Aligning LLM Agents by Learning Latent Preference from User Edits,” presents PRELUDE, a framework that infers a user’s implicit preferences from their edit history, with the goal of reducing how much editing users must do on LLM-generated responses. Its learning algorithm, CIPHER, uses an LLM to infer a textual description of the preference implied by edits in a given context, then retrieves those inferred preferences to shape future responses. Evaluated on summarization and email-writing tasks, CIPHER substantially cuts both user editing effort and computational cost, surpassing standard baselines. The approach not only aligns LLMs more closely with user needs but also improves transparency, since users can inspect and adjust the inferred preferences.
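
To make the loop concrete, here is a minimal Python sketch of a CIPHER-style agent. It is a sketch under assumptions, not the paper's implementation: the llm() call is a placeholder for any instruction-following model, and a simple keyword-overlap retriever stands in for the framework's learned retrieval. After each user edit, the agent asks the LLM to verbalize the preference the edit reveals, stores it with the context, and retrieves the closest stored preference to condition future responses.

```python
def llm(prompt: str) -> str:
    """Placeholder for a call to any instruction-following LLM (hypothetical)."""
    raise NotImplementedError("plug in a model API here")


# Each entry pairs a set of context keywords with an inferred preference description.
preference_store: list[tuple[set, str]] = []


def keywords(text: str) -> set:
    return {w.lower() for w in text.split() if len(w) > 3}


def retrieve_preference(context: str) -> str:
    """Return the stored preference whose context overlaps most with this one."""
    ctx = keywords(context)
    best = max(preference_store, key=lambda p: len(p[0] & ctx), default=None)
    return best[1] if best and (best[0] & ctx) else ""


def respond(context: str) -> str:
    """Generate a response conditioned on any previously inferred preference."""
    pref = retrieve_preference(context)
    return llm(f"Known user preference (may be empty): {pref}\nTask: {context}")


def learn_from_edit(context: str, draft: str, edited: str) -> None:
    """Ask the LLM to verbalize the preference the user's edit reveals, then store it."""
    pref = llm(
        "In one sentence, describe the writing preference revealed by this edit.\n"
        f"Draft: {draft}\nUser's edit: {edited}"
    )
    preference_store.append((keywords(context), pref))
```

The framework's retrieval and preference aggregation are more involved than this; the sketch only captures the overall infer-store-retrieve loop.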

SMPLer: Taming Transformers for Monocular 3D Human Shape and Pose Estimation

Summarized by: Alexa Sterling [arxiv.org]

SMPLer advances monocular 3D human shape and pose estimation by adapting Transformers to an SMPL-based framework. By decoupling attention operations and adopting an SMPL-based target representation, it can exploit high-resolution image features without incurring prohibitive computational cost. With additional modules such as multi-scale attention and joint-aware attention, SMPLer markedly improves reconstruction accuracy, reaching an MPJPE of 45.2 mm on Human3.6M and outperforming Mesh Graphormer by more than 10% while using fewer parameters. The result is efficient, accurate 3D human modeling, with resources made available to the community.
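
As a rough illustration of what joint-aware attention can look like (a hypothetical sketch, not the authors' architecture), the PyTorch module below gives each SMPL joint its own learnable query token that attends over backbone image features, then regresses per-joint rotations and shape parameters from the attended features.

```python
import torch
import torch.nn as nn

NUM_JOINTS = 24  # SMPL body joints
DIM = 256


class JointAwareAttention(nn.Module):
    """One learnable query per SMPL joint attends over backbone image features."""

    def __init__(self, dim: int = DIM, heads: int = 8):
        super().__init__()
        self.joint_queries = nn.Parameter(torch.randn(NUM_JOINTS, dim))
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.pose_head = nn.Linear(dim, 6)    # per-joint 6D rotation representation
        self.shape_head = nn.Linear(dim, 10)  # SMPL shape betas from pooled joint tokens

    def forward(self, image_tokens: torch.Tensor):
        # image_tokens: (B, N, dim), e.g. a flattened feature map from a CNN/ViT backbone
        b = image_tokens.shape[0]
        queries = self.joint_queries.unsqueeze(0).expand(b, -1, -1)
        joint_feats, _ = self.attn(queries, image_tokens, image_tokens)
        pose = self.pose_head(joint_feats)                # (B, 24, 6)
        shape = self.shape_head(joint_feats.mean(dim=1))  # (B, 10)
        return pose, shape


# Example: 2 images whose backbone features form a 16x16 grid of 256-dim tokens.
pose, shape = JointAwareAttention()(torch.randn(2, 16 * 16, DIM))
```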

CT-GLIP: 3D Grounded Language-Image Pretraining with CT Scans and Radiology Reports for Full-Body Scenarios

Summarized by: Alexa Sterling [arxiv.org]

In today’s issue, we explore “CT-GLIP: 3D Grounded Language-Image Pretraining with CT Scans and Radiology Reports for Full-Body Scenarios.” The study extends medical vision-language pretraining (Med-VLP) to 3D imaging, specifically full-body CT scans, overcoming the limitation of earlier Med-VLP methods that focused on 2D images of single body parts. CT-GLIP builds a multimodal dataset of CT images and corresponding radiology reports and constructs organ-level image-text pairs to strengthen multimodal contrastive learning, aligning visual features with precise diagnostic text. Trained on 44,011 organ-level vision-text pairs from 17,702 patients, CT-GLIP outperforms standard frameworks in both zero-shot and fine-tuning settings, marking a significant advance for Med-VLP in complex 3D medical imaging.
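
The organ-level alignment objective is easiest to see as a symmetric contrastive loss over matched organ-crop and report-text embeddings. The sketch below assumes generic encoders have already produced those embeddings, and it shows only a standard CLIP-style InfoNCE formulation, which is in the spirit of, but not identical to, CT-GLIP's training objective.

```python
import torch
import torch.nn.functional as F


def organ_level_contrastive_loss(organ_emb: torch.Tensor,
                                 text_emb: torch.Tensor,
                                 temperature: float = 0.07) -> torch.Tensor:
    """organ_emb, text_emb: (N, D) embeddings where row i of each forms a matched pair."""
    organ_emb = F.normalize(organ_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = organ_emb @ text_emb.t() / temperature           # (N, N) similarity matrix
    targets = torch.arange(organ_emb.size(0), device=organ_emb.device)
    loss_organ_to_text = F.cross_entropy(logits, targets)     # match each organ to its report text
    loss_text_to_organ = F.cross_entropy(logits.t(), targets) # and each report text to its organ
    return 0.5 * (loss_organ_to_text + loss_text_to_organ)


# Example with dummy embeddings for a batch of 8 organ-report pairs:
loss = organ_level_contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))
```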

TOP-Nav: Legged Navigation Integrating Terrain, Obstacle and Proprioception Estimation

Summarized by: Alexa Sterling [arxiv.org]

In the realm of legged robotics, navigating challenging environments demands a fusion of terrain awareness, obstacle avoidance, and proprioceptive feedback. The paper introduces TOP-Nav, a novel framework that integrates these elements to enhance the navigational capabilities of legged robots. By leveraging a comprehensive path planner, TOP-Nav enables robots to select optimal pathways over terrains with higher traversability while effectively circumventing obstacles. The integration of a terrain estimator and a proprioception advisor within the framework allows for real-time adjustments based on terrain and motion feedback, significantly improving the robot’s ability to navigate diverse and unpredictable landscapes. Extensive experiments in both simulated and real-world settings demonstrate TOP-Nav’s superior performance in open-world navigation, showcasing its potential to overcome the limitations of existing methods that rely heavily on visual inputs or prior terrain knowledge.
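
One way to picture how such a planner weighs its inputs is as a waypoint cost that combines goal progress, terrain traversability, obstacle proximity, and a proprioception-derived penalty. The Python sketch below is a hypothetical illustration of that trade-off; the weights and the three estimator callbacks (traversability, obstacle_dist, proprio_penalty) are stand-ins, not TOP-Nav's actual components.

```python
import math


def waypoint_cost(point, goal, traversability, obstacle_dist, proprio_penalty,
                  w_goal=1.0, w_terrain=2.0, w_obstacle=3.0, w_proprio=1.5):
    """Score a candidate waypoint; lower is better. All weights are illustrative."""
    goal_term = math.dist(point, goal)                     # remaining distance to the goal
    terrain_term = 1.0 - traversability(point)             # 0 = easy terrain, 1 = impassable
    obstacle_term = 1.0 / max(obstacle_dist(point), 1e-3)  # grows sharply near obstacles
    proprio_term = proprio_penalty(point)                  # e.g. recent slippage measured nearby
    return (w_goal * goal_term + w_terrain * terrain_term
            + w_obstacle * obstacle_term + w_proprio * proprio_term)


def pick_waypoint(candidates, goal, traversability, obstacle_dist, proprio_penalty):
    """Choose the candidate waypoint with the lowest combined cost."""
    return min(candidates, key=lambda p: waypoint_cost(
        p, goal, traversability, obstacle_dist, proprio_penalty))


# Example with toy estimators on a flat 2D world:
best = pick_waypoint(
    candidates=[(1.0, 0.5), (0.5, 1.0), (1.0, 1.0)],
    goal=(5.0, 5.0),
    traversability=lambda p: 0.9,   # stand-in terrain estimator
    obstacle_dist=lambda p: 2.0,    # stand-in obstacle map query
    proprio_penalty=lambda p: 0.1,  # stand-in proprioception advisor
)
```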

XFT: Unlocking the Power of Code Instruction Tuning by Simply Merging Upcycled Mixture-of-Experts

Summarized by: Alexa Sterling [arxiv.org]

In today’s issue, we explore XFT, a novel training scheme from the paper “XFT: Unlocking the Power of Code Instruction Tuning by Simply Merging Upcycled Mixture-of-Experts.” XFT rethinks instruction tuning for code LLMs: it first upcycles a dense model into a mixture-of-experts with a shared expert mechanism and a new routing weight strategy, then merges the tuned experts back into a single dense model via learnable model merging. With a small code LLM (under 3B parameters), XFT achieves top results on coding benchmarks such as HumanEval and surpasses standard supervised fine-tuning by 13% on HumanEval+, all without adding inference cost, suggesting broad applicability for code instruction tuning.
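
The “merge back to dense” step is what keeps inference cost flat. As a minimal sketch (assuming the upcycled experts share a common FFN weight shape, and using fixed example mixing coefficients where XFT learns them), merging can be as simple as a weighted average of expert weight matrices:

```python
import torch


def merge_experts(expert_weights: list, mix: torch.Tensor) -> torch.Tensor:
    """expert_weights: E tensors of identical shape (one FFN weight matrix per expert).
    mix: (E,) nonnegative coefficients summing to 1 (learned in XFT; fixed here)."""
    stacked = torch.stack(expert_weights)            # (E, out_dim, in_dim)
    return torch.einsum("e,eoi->oi", mix, stacked)   # weighted sum -> one dense matrix


# Example: four upcycled experts collapsed with uniform mixing coefficients.
experts = [torch.randn(2048, 768) for _ in range(4)]
dense_ffn_weight = merge_experts(experts, torch.full((4,), 0.25))
```

Because the merged result is an ordinary dense weight matrix, the deployed model runs exactly like the original dense LLM.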

UMass Lowell AI, Robotics Expert Named AAAS Fellow

Summarized by: Jordan Hayes [www.uml.edu]

Renowned AI and robotics expert Professor Holly Yanco of UMass Lowell has been named a 2023 fellow of the American Association for the Advancement of Science (AAAS), a lifetime distinction. The honor recognizes Yanco's substantial contributions to human-robot interaction and her leadership within the scientific community, and it places her among an esteemed group of fellows that has included figures like Thomas Edison and Maria Mitchell, underscoring the breadth of disciplines AAAS represents. Since joining UMass Lowell in 2001, Yanco has strengthened the university's reputation in robotics, AI, and assistive technology through her teaching, research, and leadership, and she founded the New England Robotics Validation and Experimentation Center (NERVE), one of the nation's premier robotics testing facilities. Her current projects include leading the NSF's AI Institute for Collaborative Assistance, which aims to develop AI systems that support aging adults. Her election as an AAAS fellow highlights her pivotal role in advancing human-robot interaction and her impact on the next generation of scientists.



Technical details

Created at: April 24, 2024, 03:25:43, using gpt-4-turbo-preview.

Processing time: 0:05:37.073382, cost: $3.36

The Staff

Editor: Ethan Zhou

Ethan Zhou is the Editor-in-Chief of "Tech by AI," a daily magazine focused on AI and, specifically, generative AI. With a solid foundation in journalism and a passion for technology, Zhou has carved out a niche as a leading voice in the AI and generative AI space. His strengths are investigative skill and an ability to uncover and present stories that not only inform but also provoke thought. Committed to ethical journalism, he takes a keen interest in the societal impacts of AI, including privacy concerns, AI governance, and the future of work. His editorial decisions are guided by truth, transparency, and the public interest, and he is known for mentoring young writers and for a hands-on approach to leadership.

Alexa Sterling:

Alexa Sterling is a reporter for "Tech by AI," known for sharp analytical skills and an ability to delve deep into technical subjects. With a background in computer science and journalism, Sterling has a knack for breaking down complex AI concepts into accessible, engaging stories. A passion for emerging technologies, especially generative AI, drives her to constantly seek out the latest developments and trends. Her work ethic is unmatched, and she has a reputation for thorough research and fact-checking, ensuring that her articles are not only informative but also accurate and trustworthy.

Jordan Hayes:

Jordan Hayes is a reporter for "Tech by AI" with an exceptional talent for uncovering the societal impacts of AI and generative AI. A degree in sociology and a postgraduate qualification in data science give Hayes a distinctive perspective, and the resulting articles often explore the ethical, privacy, and governance issues surrounding AI while keeping complex discussions accessible to a broad audience. An empathetic approach and a commitment to ethical journalism have earned Hayes the respect of peers. Always curious and on the lookout for the next big story, Hayes has a talent for interviewing experts and laypeople alike, bringing a diverse range of voices into each narrative.

Casey Monroe:

Casey Monroe is a reporter for "Tech by AI" whose creativity and flair for storytelling set them apart. With a background in creative writing and digital media, Monroe specializes in crafting compelling narratives around generative AI applications in art, music, and entertainment. The resulting articles are not just informative but also visually and emotionally engaging, often accompanied by multimedia elements Monroe produces personally. A keen eye for trends, a talent for predicting the next big thing in AI-driven content creation, and an approachable writing style that connects with a younger audience make Monroe a valuable member of the team.