VR Headset Gives The Partially Sighted Better Sight
A virtual reality headset has restored sight to people who are legally blind. While it didn’t cure the physical cause of their blindness, the device let people with severe macular degeneration resume activities like reading and gardening – tasks they previously found impossible.
Macular degeneration is a common, age-related condition. It affects around 11 million people in the US and around 600,000 people in the UK. Damage to blood vessels causes the central part of the retina, called the macula, to degrade. This leaves people with a blind spot in the centre of their vision, and can make those with the condition legally blind.
“You can still see with your periphery, but it’s difficult or impossible to recognise people, to read, to perform everyday activities,” says Bob Massof at Johns Hopkins University in Maryland.
The new system, called IrisVision, uses VR to make the most of peripheral vision. The user puts on a VR headset that holds a Samsung Galaxy phone. It records the person’s surroundings and displays them in real time, but the user can magnify the image until their working peripheral vision can resolve it. Magnifying the image this way effectively shrinks or eliminates the blind spot.
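The core trick here is digital magnification of a live camera frame: enlarging the scene pushes detail that would have fallen on the damaged macula out onto healthy peripheral retina. Below is a minimal sketch of centre-crop magnification using numpy with nearest-neighbour upscaling; the crop fraction and zoom factor are illustrative, not IrisVision’s actual parameters.

```python
import numpy as np

def magnify_center(frame: np.ndarray, zoom: int = 4) -> np.ndarray:
    """Crop the central 1/zoom of the frame and blow it back up to
    full size, so fine detail spreads across a larger retinal area."""
    h, w = frame.shape[:2]
    ch, cw = h // zoom, w // zoom                 # crop size
    top, left = (h - ch) // 2, (w - cw) // 2      # centre the crop
    crop = frame[top:top + ch, left:left + cw]
    # upscale by repeating each pixel `zoom` times along each axis
    return crop.repeat(zoom, axis=0).repeat(zoom, axis=1)

# Toy 8x8 "frame": the output keeps the display size, but shows
# only the magnified centre.
frame = np.arange(64, dtype=np.uint8).reshape(8, 8)
out = magnify_center(frame, zoom=4)
print(out.shape)  # (8, 8)
```

A real system would run this per video frame and use proper interpolation, but the geometry is the same.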
Google Glass Helps Autistic Children Understand The World
Many children diagnosed with autism find it hard to decipher other people’s facial expressions. An interactive system that uses Google Glass may help them out.
“I can see the difference this will have in people’s lives in a significant, immediate and meaningful way,” says Donji Cullenbine, whose child took part in the study. She said enrolling on the study was “one of the best choices that I have ever made for him”.
Using technology such as Google Glass to support individuals with autism has been a promising area of research. The idea is that such devices can provide a game-like environment in which to practise social skills without being overwhelming. But the approach had never been tested outside the lab before.
Dennis Wall at the Stanford University School of Medicine and his colleagues have now demonstrated that the effects work outside of the lab. They gave 14 children with autism a system called Superpower Glass to try at home for an average of 10 weeks each.
Superpower Glass is a Google Glass headset combined with an Android app. The system detects facial expressions in real time, displays an emoji matching each expression in the corner of the Google Glass display, and names the emotion aloud through the headset’s speaker. The idea is that the children find it easier to spot an emotion when it is labelled in this way.
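The output stage of such a system is essentially a mapping from classifier labels to cues. A minimal sketch, in which the label set, emoji choices, and confidence threshold are all assumptions rather than the study’s actual configuration:

```python
# Hypothetical label set and emoji mapping -- illustrative only.
EMOJI = {
    "happy": "😄", "sad": "😢", "angry": "😠",
    "surprised": "😲", "scared": "😨", "neutral": "😐",
}

def cue(label: str, confidence: float, threshold: float = 0.6):
    """Turn a classifier's (label, confidence) output into the two
    cues the glasses give: an on-screen emoji and a spoken word.
    Returns None when the classifier is unsure, so the wearer
    isn't shown noisy guesses."""
    if confidence < threshold:
        return None
    return EMOJI[label], f"say: {label}"

print(cue("happy", 0.9))   # shows the emoji and speaks the label
print(cue("sad", 0.3))     # below threshold: no cue
```

Suppressing low-confidence output matters in practice: a wrong emotion label is worse for learning than no label at all.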
To find out if the setup had any effect, the team asked the children’s parents to assess their children before, during and after wearing the glasses using the Social Responsiveness Scale, which is often used in autism research.
The children’s scores on this questionnaire decreased by an average of 7.38 points during the study, indicating less severe symptoms. None of the participants’ scores increased. Six of the 14 participants had large enough declines in their scores to move down one step in the severity of their autism classification.
The results should be interpreted with caution since the study didn’t have a control arm, says Wall. However, the findings are promising, he says.
First Encouraging Trial Of A Drug For Alzheimer’s
The long, discouraging quest for a medication that works to treat Alzheimer’s reached a potentially promising milestone last month. For the first time in a large clinical trial, a drug was able to both reduce the plaques in the brains of patients and slow the progression of dementia.
More extensive trials will be needed to know if the new drug is truly effective, but if the results, presented at the Alzheimer’s Association International Conference in Chicago, are borne out, the drug may be the first to successfully attack both the brain changes and the symptoms of Alzheimer’s.
“This trial shows you can both clear plaque and change cognition,” said Dr. Reisa Sperling, director of the Center for Alzheimer Research and Treatment at Brigham and Women’s Hospital in Boston, who was not involved in the study. “I don’t know that we’ve hit a home run yet. It’s important not to over-conclude on the data. But as a proof of concept, I feel like this is very encouraging.”
Diet Mirror Builds 3-D Model Of Your Body To Help You Diet
San Francisco’s Naked Labs has started shipping its 3D-scanning smart mirror with rotating scale. Connected to a mobile app, the Naked system builds 3D models of your body, then tracks them through your hypothetical healthy transformation.
The Naked system consists of three pieces. The first is a smart mirror, with three embedded Intel RealSense depth sensors capable of scanning objects in three dimensions. There’s also a built-in alignment laser to help you get the perfect pose each time, and a laptop-grade processor to take the roughly 4 GB of data each scan yields and crunch it down to a usable model.
Part two is a rotating weight scale that works on hard or carpeted floors. It’s got a built-in battery and charges off the mirror, which plugs into the wall. You stand on it, strike a pose, and then quietly rotate for 15 seconds as the mirror scrutinizes every inch of your body. Well, technically every tenth of an inch or so, making it quite a high-resolution scan.
In the process of building a 3D model to send via Wi-Fi/Bluetooth to part three – the smartphone app – the Naked system takes circumference measurements of your neck, shoulders, chest, upper arms, waist, stomach, hips, upper thighs, mid thighs and calves. From these it derives your body fat percentage, fat mass, lean mass, and weight, giving you a solid set of metrics for tracking your overall health and progress toward your fitness goals.
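Naked Labs’ exact body-composition model is proprietary, but circumference-based body-fat estimates are a well-established idea. As an illustration, here is the US Navy circumference method for men, which needs only the kind of measurements a body scan yields; the sample inputs are made up.

```python
from math import log10

def navy_body_fat_male(waist_cm: float, neck_cm: float,
                       height_cm: float) -> float:
    """US Navy circumference method (male variant): estimate
    body-fat percentage from waist and neck circumference plus
    height."""
    return 495 / (1.0324
                  - 0.19077 * log10(waist_cm - neck_cm)
                  + 0.15456 * log10(height_cm)) - 450

bf = navy_body_fat_male(waist_cm=90, neck_cm=38, height_cm=180)
weight_kg = 80.0
fat_mass = weight_kg * bf / 100          # kg of fat
lean_mass = weight_kg - fat_mass         # kg of everything else
print(f"{bf:.1f}% body fat, {fat_mass:.1f} kg fat, {lean_mass:.1f} kg lean")
```

A scanner with tenth-of-an-inch resolution can feed far richer models than this two-measurement formula, but the principle – circumferences in, composition estimates out – is the same.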
The Smart Suit That Measures Your Body For A Perfect Clothes Fit
One problem plagues the business of selling clothes online—predicting how a garment will fit without trying it on. Behemoths such as ASOS, a British internet platform that sells its own and others’ apparel, try to overcome this by allowing people to buy several sizes to try on at home and return items free of charge—at huge cost to them. Enter the body-measurement suit from Start Today, a Japanese firm that runs the “Zozotown” platform in Japan on which clothing companies from around the world sell their wares, as well as its own private label, Zozo.
In the past three months Start Today has distributed to just over 1m Japanese customers, free of charge, its “Zozosuit”, a skin-tight, full-body suit covered in around 350 fiducial markers, small objects that can be used as a point of reference for measurements. Shoppers slip on the suit and slowly rotate as their smartphone takes photos.
The firm uses the images to create a 3D scan of their body, which it can use to offer a range of customised services. Among these are made-to-measure business suits for men from its Zozo brand, which are selling strongly, and jeans and T-shirts that fit most snugly from tens of thousands of pre-cut patterns, also from Zozo.
At the most basic level, when customers choose an item from one of the 6,400 brands listed on Zozotown—the core of Start Today’s business—the platform uses the Zozosuit data to recommend the right size.
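How Zozotown actually matches scan data to sizes isn’t public, but the basic version of the problem is a nearest-fit search: pick the size whose garment measurements sit closest to the shopper’s body measurements. A minimal sketch, with a made-up size chart and measurement names:

```python
def recommend_size(body: dict,
                   size_chart: dict) -> str:
    """Return the size whose measurements minimise the squared
    error against the shopper's scanned measurements."""
    def fit_error(garment: dict) -> float:
        return sum((garment[k] - body[k]) ** 2 for k in body)
    return min(size_chart, key=lambda s: fit_error(size_chart[s]))

# Illustrative chart for one hypothetical garment.
chart = {
    "S": {"chest_cm": 92, "waist_cm": 78},
    "M": {"chest_cm": 98, "waist_cm": 84},
    "L": {"chest_cm": 104, "waist_cm": 90},
}
print(recommend_size({"chest_cm": 99, "waist_cm": 83}, chart))
```

A production system would weight measurements by how much each affects fit for that garment type (waist matters more for jeans than for T-shirts), but the least-squares core is the natural starting point.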
AI Systems Are Equal To Human Doctors In Some Diagnoses
In 2017, artificial intelligence scientist Sebastian Thrun and colleagues at Stanford University demonstrated that a “deep learning” algorithm was capable of diagnosing potentially cancerous skin lesions as accurately as a board-certified dermatologist.
Now Google has reported that DeepMind has developed an AI which can successfully detect more than 50 types of eye disease just by looking at 3D retinal scans.
DeepMind has just published the results of joint research with Moorfields Eye Hospital, a renowned centre for treating eye conditions in London, in Nature Medicine.
The company said its AI was as accurate as expert clinicians when it came to detecting diseases, such as diabetic eye disease and macular degeneration. It could also recommend the best course of action for patients and suggest which needed urgent care.
Experts say medical images, like photographs, X-rays, and MRIs, are a nearly perfect match for the strengths of deep-learning software, which has in the past few years led to breakthroughs in recognizing faces and objects in pictures.
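The reason images suit deep learning so well is the convolution operation at the heart of these models: one small learned filter is slid across every position in the image, so the same pattern detector works anywhere a lesion or edge appears. A toy numpy version (deep-learning “convolution” is technically cross-correlation, as here) with a hand-set vertical-edge filter standing in for a learned one:

```python
import numpy as np

def conv2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Valid-mode 2D cross-correlation: slide the kernel over every
    position and record the weighted sum -- the building block that
    deep image models stack in layers."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge filter fires wherever intensity rises left-to-right.
edge = np.array([[-1.0, 0.0, 1.0]] * 3)
image = np.zeros((5, 5))
image[:, 3:] = 1.0            # dark left half, bright right half
out = conv2d(image, edge)
print(out)                    # strong response along the boundary
```

A trained network learns thousands of such filters from labelled scans instead of hand-setting them, but position-independence is what makes the approach such a good match for X-rays, retinal scans, and dermatology photos.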
Companies are already in pursuit. Verily, Alphabet’s life sciences arm, joined forces with Nikon last December to develop algorithms to detect causes of blindness in diabetics. The field of radiology, meanwhile, has been dubbed the “Silicon Valley of medicine” because of the number of detailed images it generates.
Could You Use An Extra Pair Of Hands?
Yamen Saraiji has four arms, and two of them are giving him a hug.
The limbs embracing Saraiji are long, lanky, and robotic, and they’re connected to a backpack he’s wearing. The arms are controlled remotely by another person wearing an Oculus Rift VR headset, through which they see the world from Saraiji’s perspective (cameras linked to the backpack provide the view) and wield handheld controllers to direct the robotic arms and their attached hands.
After the hug, the robotic arms release Saraiji. Then the right hand gives him a high five, and Saraiji smiles.
Saraiji, an assistant professor at Tokyo-based Keio University’s Graduate School of Media Design, led the development of this robotic-arms-on-a-backpack project, called Fusion, to explore how people may be able to work together to control (or augment) one person’s body. Though some of the actions Saraiji shows via video chat from his lab in Japan seem trivial, he thinks the device could be useful for things like physical therapy and instructing people from afar.
Magic Leap Finally Releases Its Mixed-Reality Headset
Magic Leap has built a gadget that is real, and cool, and can mix three-dimensional virtual images with reality better than any other augmented or mixed-reality headset—whatever you want to call it—that has so far been reviewed.
The big question now is: what will people do with this thing?
The company hopes developers and other creative types will start coming up with answers shortly: two weeks ago, Magic Leap started selling its long-awaited first gadget, a pair of black, tinted, fly-eyed goggles called Magic Leap One.
This first headset is not for everyone. You’ll first have to register as a developer—the company hopes a community of developers will emerge to build apps for the headset, as they do for smartphones—and shell out $2,295 (for comparison, Microsoft’s HoloLens headset, also still aimed at developers, costs $3,000 or $5,000). You also have to be at least 18, and able to have it delivered to you in one of several US cities where it will be initially available, such as New York or Seattle.
If none of these hurdles stops you, you’ll receive the headset, a wearable computer that connects to it, and a one-handed controller. A rechargeable battery gives the whole system enough juice to work for up to three hours at a time.
Would You Like Your Suitcase To Follow You Around?
If you travel a lot, it’s possible that you may get tired of dragging your suitcase around. Wouldn’t things be easier if it could just follow along beside you? Well, according to Beijing-based start-up ForwardX Robotics, that’s just what its motorized Ovis suitcase is able to do.
Among other things, the Ovis is equipped with two brushless electric motors (one driving each rear wheel), multiple video cameras, and a Qualcomm Snapdragon chip running artificial intelligence-based computer vision algorithms.
This combo reportedly allows it to visually track its user, and stay at their right-hand side as they stride through the airport or down the street at speeds of up to 10 km/h (6 mph). It’s also claimed to be capable of identifying obstacles such as other people, automatically steering itself around them.
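Person-following with two independently driven wheels boils down to a control loop: estimate where the user is relative to where the suitcase wants them, and steer to cancel the error. A minimal sketch of proportional differential-drive control, assuming a vision module that already reports the user’s horizontal offset in the frame; the gains, API, and follow distance are all illustrative, not ForwardX’s.

```python
def wheel_speeds(offset: float, distance_m: float,
                 target_gap_m: float = 1.0,
                 k_turn: float = 0.8, k_speed: float = 1.2,
                 max_speed: float = 2.8):
    """Map a tracked person's position to (left, right) wheel speeds.

    offset:     user's horizontal position in the camera frame,
                -1 (far left) to +1 (far right); 0 = dead ahead.
    distance_m: estimated distance to the user.

    Drives forward faster the further behind it has fallen, and
    turns by speeding up one wheel relative to the other.
    max_speed of 2.8 m/s is roughly the 10 km/h the Ovis claims.
    """
    gap_error = distance_m - target_gap_m
    forward = min(max(k_speed * gap_error, 0.0), max_speed)
    turn = k_turn * offset
    return forward + turn, forward - turn

print(wheel_speeds(offset=0.0, distance_m=2.0))   # drive straight
print(wheel_speeds(offset=0.5, distance_m=2.0))   # veer right
```

Obstacle avoidance then layers on top of this loop, temporarily overriding the steering term to route around a detected person before re-acquiring the user.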
If users wish, they can take manual control of the Ovis at any time and pull it along like a normal suitcase, simply by grabbing its handle. This can also be done if its removable 96-Wh lithium-ion battery runs out.