- Gen-AI and computer vision redefine the fight against wildfires
- Forget visuals, this new robot hand relies on tactile data input
- Australian brain chip doesn’t require dangerous skull surgeries
Over the past few decades, technological advancements have consistently altered our societal fabric and changed how we perceive and interact with our surroundings. Each wave of technological evolution has pushed the boundaries of the possible, reshaping industries, fostering new ways of communication, and transforming our daily lives. What does the world of tech have in store for us next? From the realms of artificial intelligence and quantum computing to the ever-evolving Internet of Things (IoT) and augmented reality, the horizons are expanding faster than ever. In this article, we’ll explore some interesting new tech developments that are poised to disrupt the status quo, redefine industries and shape our digital future. Whether you’re a tech aficionado or a curious bystander, this read promises inspiring insights into innovations like AI-driven firefighting technology, robot hands that operate based on touch data input, and increasingly intelligent and safer brain-to-computer interfaces.
“The fusion of large language models and computer vision will bring about even more powerful and accurate products that are easier to deploy.”
– Emrah Gultekin, CEO of Chooch
Gen-AI and computer vision redefine the fight against wildfires
Chooch, a key player in Silicon Valley’s computer vision sector, is a member of NVIDIA’s Inception program, which nurtures startups with cutting-edge AI and data science innovations, and of its Metropolis initiative, focused on smart city applications and AIoT. The company is now leading the development of an innovative AI solution to address California’s wildfires. In 2020 alone, the region faced 9,900 wildfires, which consumed 4.3 million acres of woodland and caused losses amounting to $19 billion. Confronted with the scale of that fire season, the startup reached out to fire authorities to explore how its combination of computer vision and generative AI could assist firefighting efforts. Utilities and fire departments in California were overwhelmed, dealing with up to 2,000 false wildfire alerts a week from the existing detection camera network; mist, precipitation, and dirty camera lenses all contributed to these inaccurate readings. As a solution, Chooch initiated a test project, connecting its gen-AI-powered fire detection application to the camera setup.
The system evaluates images four times per hour, looking for indications of smoke or flames, and autonomously generates annotations for every image, enabling assessors to confirm the presence of smoke. It provides California’s firefighting teams with a real-time alert interface on their mobile devices and computers, facilitating rapid wildfire detection. Service providers can also integrate the application with drones and stationary cameras to spot wear on capacitors or vegetation overgrowing power cables. The weekly number of false positives dropped dramatically, from 2,000 to just eight. The company posits that if the wildfire detection system prevents even one fire from escalating, it will have paid for itself for the next half-century. Emrah Gultekin, the Turkish-born CEO of Chooch, expressed optimism, noting that “the fusion of large language models and computer vision will bring about even more powerful and accurate products that are easier to deploy”. As a testament to its technology’s potential, Chooch is gearing up to participate in an $11 million Xprize competition centred on wildfire detection and combat.
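The article doesn’t disclose how Chooch’s software actually works, but the false-positive suppression it describes can be illustrated with a short sketch. Everything below (the `FrameResult` type, the confidence threshold, the two-consecutive-checks rule) is a hypothetical illustration, not Chooch’s code; the underlying idea is simply that transient triggers such as mist, rain, or a dirty lens rarely persist across successive checks, while real smoke does.

```python
from dataclasses import dataclass

@dataclass
class FrameResult:
    """One periodic check of a camera feed (four per hour in Chooch's setup)."""
    camera_id: str
    smoke_score: float  # hypothetical classifier confidence, 0.0 to 1.0

def should_alert(history, threshold=0.8, consecutive=2):
    """Raise an alert only when the last `consecutive` checks all exceed
    `threshold`. A one-off spike (mist drifting past, rain on the lens)
    fails this test; persistent smoke passes it."""
    if len(history) < consecutive:
        return False
    return all(r.smoke_score >= threshold for r in history[-consecutive:])
```

In use, each camera would accumulate a `FrameResult` every fifteen minutes, and only cameras whose recent history satisfies `should_alert` would push a notification to the firefighters’ mobile interface.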
Forget visuals, this new robot hand relies on tactile data input
Another groundbreaking innovation comes from the University of California, San Diego, where roboticist Xiaolong Wang and his team wanted to determine whether robots could execute intricate coordination using only basic tactile data. While many robots rely on ‘vision’ to interact with objects, optical sensors often falter in recognising an object’s full shape when it’s obscured or in low light. The team’s innovative, cost-effective method now enables a robot hand to ‘feel’ the shape of an unknown object and skilfully manipulate it using just that tactile feedback. To accomplish this, the scientists equipped a four-fingered robotic hand with 16 inexpensive touch sensors spread across its palm and digits, each designed to detect simply the presence or absence of contact with an object. Wang notes: “While one sensor doesn’t catch much, a lot of them can help you capture different aspects of the object”. The primary objective for the robot was to turn objects within its grip. To gather extensive touch data, the team first ran virtual experiments in which a simulated robotic hand tried turning various objects, such as cylinders, uniquely shaped cuboids, balls and other items.
Using the binary feedback — ‘touching’ or ‘not touching’ — from each of these sensors, the researchers created a digital model. This model could ascertain an item’s orientation during every phase of manipulation, enabling the robotic fingers to turn it seamlessly and securely. While modelling high-resolution tactile sensors in simulation is a challenge, the team managed to move the newly developed capability to a real robot hand. This allowed the hand to handle all kinds of objects it had never encountered before, from fruit to kitchen utensils and bath toys. The ease of this transfer from a computer model to reality can be attributed to the simplicity of the binary sensor data, negating the need for precise physics simulations or exact measurements. Lerrel Pinto of New York University, who is an expert in robot interactions with our physical environment, raises questions about the system’s ability to manage more nuanced tasks, saying: “During our experiments with tactile sensors, we found that tasks like unstacking cups and opening a bottle cap were significantly harder — and perhaps more useful — than rotating objects”. Looking ahead, Wang’s team has aspirations to address these intricate motions and even introduce sensors to areas like the finger’s sides. Adding visual components to supplement tactile feedback for complex shapes is another avenue the researchers are pursuing.
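The core idea, reducing 16 rich pressure readings to a simple ‘touching / not touching’ pattern and inferring the object’s orientation from that pattern, can be sketched in a few lines. The threshold value and the nearest-neighbour lookup below are illustrative assumptions standing in for the team’s learned model; they are not the UCSD implementation.

```python
def contact_vector(pressures, contact_threshold=0.05):
    """Collapse raw sensor readings into the binary contact pattern the
    UCSD team describes. The threshold here is an illustrative guess."""
    return [1 if p > contact_threshold else 0 for p in pressures]

class TactilePoseEstimator:
    """Toy nearest-neighbour stand-in for the learned model: it maps a
    binary contact pattern to the orientation of the closest pattern
    seen during (simulated) training."""

    def __init__(self):
        self.examples = []  # (contact pattern, orientation in radians)

    def record(self, pattern, angle):
        self.examples.append((list(pattern), angle))

    def estimate(self, pattern):
        # Hamming distance to each stored pattern; return the closest match.
        best = min(self.examples,
                   key=lambda ex: sum(a != b for a, b in zip(ex[0], pattern)))
        return best[1]
```

The binary representation is also what made the sim-to-real transfer easy: matching a 16-bit contact pattern is far less sensitive to simulation inaccuracies than reproducing exact pressure values would be.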
Australian brain chip doesn’t require dangerous skull surgeries
As many tech enthusiasts are aware, several companies across the world, including Blackrock Neurotech, BrainGate and, of course, Elon Musk’s Neuralink, are making great strides in the development of brain-reading implants, with Neuralink initiating human trials as we speak. These devices place thousands of electrodes into specific brain regions to convert electrical activity into high-bandwidth signals, enabling control of devices like smartphones, computers, and potentially wheelchairs or prosthetics. Brain-computer interfaces (BCIs) are promoted as medical devices to assist those with conditions such as motor neuron disease or spinal injuries. Current BCI techniques, however, still require creating an opening in the patient’s skull, potentially inviting a myriad of medical and regulatory challenges. Now Synchron, an Australian startup backed by industry giants Jeff Bezos and Bill Gates, appears poised to beat Neuralink to market, potentially with a better device. Synchron’s innovative brain-computer interface, known as the ‘Stentrode’, offers a much safer and more practical solution that can easily be installed by hospitals. Best of all, it does not require highly invasive and dangerous skull surgery. Inserted through the jugular vein, the device is navigated through the brain’s blood vessels and settles close to the motor cortex.
Once there, it unfolds much like a conventional self-expanding stent, positioning 16 electrodes adjacent to specific brain regions to enable instantaneous, wireless control over various devices. While this method might seem more basic than the thousand-electrode chips used by Neuralink and its peers, it has distinct early-stage advantages. For one, according to Synchron’s co-founder and CTO, Professor Nicholas Opie, the 10 patients equipped with the Stentrode find it remarkably dependable: they can operate it from the day it’s installed, and it remains calibration-free. What’s more, given that the device can be implanted by any clinic and has no external protrusions, its approval process, especially by organisations like the FDA, might be expedited. Synchron is optimistic that this will enable the Stentrode to reach disabled individuals faster than competing products. Opie envisions a future in which paralysis doesn’t limit individuals; the aim, he says, is to restore patients’ ability to communicate, move freely, and lead self-sufficient lives. During placement, medical professionals can position the device with a remarkable precision of up to half a millimetre, matching the resolution of their scanning tools. After placement, a wire connects the Stentrode to a device underneath the skin of the chest that somewhat resembles a pacemaker. The team and patients continue to test the device to ensure it’s reliable, functional, and safe, after which the aim is to use the technology to control devices such as wheelchairs and robotic limbs.
In an era dominated by rapid technological advancements, the transformative power of innovations like AI-driven firefighting systems, touch-responsive robots, and minimally invasive brain-computer interfaces cannot be overstated. They highlight a shift towards solutions that are not just groundbreaking but also prioritise safety, precision, and real-world applicability. Chooch’s initiative against devastating wildfires showcases the potential of uniting generative AI with computer vision to address real-world crises. Simultaneously, the pioneering work from the University of California reveals the untapped potential of tactile data in robotics. Yet it’s the advancements from Synchron that truly epitomise the marriage of innovation with a humane touch: their Stentrode stands as a testament to the pursuit of medical breakthroughs that prioritise patient welfare without compromising efficacy. These innovations demonstrate not only human ingenuity but also the profound positive impact technology can have on society. They serve as reminders of the limitless possibilities ahead; embracing these changes not only augments industries but has the potential to profoundly improve human lives.