Oticon ConnectClip Wins 2018 Red Dot Award for Product Design
Commenting on the award win, Gary Rosenblum, president, Oticon, Inc said, “Oticon is honored to receive another prestigious Red Dot Award, this year for our new ConnectClip. This internationally recognized symbol of excellence is a testament not only to ConnectClip’s convenient, lifestyle-enhancing features, but also to the work that goes into the design and continued evolution of our Oticon Opn hearing aid, a 2017 Red Dot Award winner.”
The multi-functional ConnectClip is designed to turn Oticon Opn hearing aids into a high-quality wireless headset for clear, hands-free calls from mobile phones, including iPhone® and Android™ smartphones. Sound from the mobile phones is streamed directly to the hearing aids and ConnectClip’s directional microphones pick up the wearer’s voice. ConnectClip serves double duty as a remote/partner microphone, helping to provide improved intelligibility of the speaker wearing it, either at a distance (up to 65 feet), in very noisy environments or in a combination of the two. Opn wearers can also use ConnectClip as a remote control for their hearing aids.
Wearable Technology Award Win
Oticon also celebrates a win at the UK’s Wearable Technology and Digital Health Show Awards, where Oticon Opn received the Innovation Award for wearable originality and advancement. The win was decided by a combination of professional jury votes and a public website vote.
Organizers at the Wearable Technology and Digital Health Show Awards commented on the win: “The judges felt that the Oticon solution presented a revolutionary approach to hearing loss, and that its technology presented a real opportunity for users to interact with the growing number of smart devices in the home. A worthy winner.”
Learn more about the expanded Oticon Opn family, ConnectClip and entire range of wireless connectivity accessories at www.Oticon.com/Connectivity.
* Apple, the Apple logo, iPhone, iPad, iPod touch, and Apple Watch are trademarks of Apple Inc., registered in the U.S. and other countries. App Store is a service mark of Apple Inc. Android, Google Play, and the Google Play logo are trademarks of Google Inc.
Images: Oticon, Red Dot
Sonic Enchant Line Adds SoundClip-A to Stream Sounds in Stereo from Numerous Devices
Now, the small, ergonomically designed clip-on device delivers added benefit as a wireless remote/partner microphone for easier listening when the speaker is at a distance or in noisy environments where listening is difficult. SoundClip-A also enables remote volume control, program changes and call pick-up with just the press of a button.
“SoundClip-A’s wireless transmission of stereo sound from all Bluetooth 2.1 smartphones and devices adds the ‘wow’ of even more wireless convenience to the many ways Enchant makes everyday sounds better,” said Sonic President & COO Joseph A. Lugara in a press statement. “With Enchant, wireless connectivity is simple and stress-free thanks to Enchant’s Dual-Radio System that delivers fast ear-to-ear connection and employs 2.4 GHz technology.”
Simply Streaming. SoundClip-A allows patients to use Enchant hearing aids as a headset for mobile calls. Users stream stereo-quality sound to both ears through their Enchant hearing aids from any Bluetooth 2.1 compatible device—including mobile phones, tablets, MP3 players, and more. The built-in microphones pick up the wearer’s voice, and sound from the call is streamed wirelessly to both ears for convenient, hands-free conversations.
When SoundClip-A is used as a wireless remote/partner microphone, the speaker simply clips on the lightweight device or keeps it nearby. The speaker’s voice can be heard more easily through the user’s Enchant hearing aids at a distance of up to 65 feet, according to the company. SoundClip-A also helps users enjoy video calls, webinars, and other audio sources for easy wireless listening in both ears.
For more information on SoundClip-A and the entire Enchant family—including Enchant100, Enchant80, and Enchant60, and popular styles including the miniRITE with ZPower, miniRITE T (with telecoil), and BTE 105—visit www.sonici.com.
Unitron Launches Moxi ALL Hearing Instrument
Published on February 22, 2018
Unitron announced the release of its latest hearing instrument, Moxi ALL.
Like all hearing instruments driven by the Tempus™ platform, Moxi ALL was designed around the company’s core philosophy of putting consumer needs at the forefront. The new hearing solution is designed to deliver “amazing sound quality,” according to Unitron, and advanced binaural performance features that help consumers hear their best in all of life’s conversations, including those on mobile phones.
After charging overnight, the rechargeable battery is designed to help “keep them in the conversation” for up to 16 hours, including two hours of mobile phone use and five hours of TV streaming. Plus, consumers never have to worry if they forget to charge, because they have the flexibility to swap in traditional batteries at any time.
A new way to deliver their most personalized solution
Consumers can take home Moxi ALL hearing instruments to try before they buy with FLEX:TRIAL™.
“Today’s consumers are not interested in one-size-fits-all. They want to know that the hearing instrument they select is personalized to their individual listening needs and preferences,” said Lilika Beck, vice president, Global Marketing, for Unitron. “This simple truth is driving our FLEX™ ecosystem—a collection of technologies, services, and programs designed to make the experience of buying and using a hearing instrument feel easy and empowering.”
As the latest addition to the FLEX ecosystem, Moxi ALL is proof of Unitron’s ongoing commitment to putting consumers at the center of its mission to provide the most personalized experience on the market when it comes to choosing hearing instruments.
The global roll-out of Moxi ALL begins February 23, 2018.
Visual Cues May Help Amplify Sound, University College London Researchers Find
Looking at someone’s lips is good for listening in noisy environments because it helps our brains amplify the sounds we’re hearing in time with what we’re seeing, finds a new University College London (UCL)-led study, the school announced on its website.
The researchers say their findings, published in Neuron, could be relevant to people with hearing aids or cochlear implants, as they tend to struggle hearing conversations in noisy places like a pub or restaurant.
The researchers found that visual information is integrated with auditory information at an earlier, more basic level than previously believed, independent of any conscious or attention-driven processes. When information from the eyes and ears is temporally coherent, the auditory cortex —the part of the brain responsible for interpreting what we hear—boosts the relevant sounds that tie in with what we’re looking at.
“While the auditory cortex is focused on processing sounds, roughly a quarter of its neurons respond to light—we helped discover that a decade ago, and we’ve been trying to figure out why that’s the case ever since,” said the study’s lead author, Dr Jennifer Bizley, UCL Ear Institute.
In a 2015 study, she and her team found that people can pick apart two different sounds more easily if the one they’re trying to focus on happens in time with a visual cue. For this latest study, the researchers presented the same auditory and visual stimuli to ferrets while recording their neural activity. When one of the auditory streams changed in amplitude in conjunction with changes in luminance of the visual stimulus, more of the neurons in the auditory cortex reacted to that sound.
“Looking at someone when they’re speaking doesn’t just help us hear because of our ability to recognize lip movements—we’ve shown it’s beneficial at a lower level than that, as the timing of the movements aligned with the timing of the sounds tells our auditory neurons which sounds to represent more strongly. If you’re trying to pick someone’s voice out of background noise, that could be really helpful,” said Bizley.
The researchers say their findings could help develop training strategies for people with hearing loss, as they have had early success in helping people tap into their brain’s ability to link up sound and sight. The findings could also help hearing aid and cochlear implant manufacturers develop smarter ways to amplify sound by linking it to the person’s gaze direction.
The paper adds to evidence that people who are having trouble hearing should get their eyes tested as well.
The study was led by Bizley and PhD student Huriye Atilgan, UCL Ear Institute, alongside researchers from UCL, the University of Rochester, and the University of Washington, and was funded by Wellcome, the Royal Society, the Biotechnology and Biological Sciences Research Council (BBSRC), Action on Hearing Loss, the National Institutes of Health (NIH), and the Hearing Health Foundation.
Original Paper: Atilgan H, Town SM, Wood KC, et al. Integration of visual information in auditory cortex promotes auditory scene analysis through multisensory binding. Neuron. 2018;97(3)[February]:640–655.e4. doi.org/10.1016/j.neuron.2017.12.03
Source: University College London, Neuron
Tinnitus: Help Now on the Horizon
ADM Tronics Unlimited, Inc (OTCQB: ADMT), a technology-based developer and manufacturer of innovative technologies, has authorized its subsidiary, Aurex International Corporation (“AIC”) to begin advertising its new hearing protection product, Tinnitus Shield™ in Tinnitus Today, the official publication of the American Tinnitus Association, ADM announced.
Tinnitus Shield™ has been designed to protect against damaging sounds shown to cause tinnitus for individuals at risk of acquiring this condition, according to the company’s announcement. These include military, police, musicians, construction workers, and many other occupations subject to Noise-Induced Hearing Loss (NIHL).
The US Veterans Health Administration (VA) reports that tinnitus is the most prevalent combat-related disability affecting veterans, making it a high-priority healthcare issue facing the military and the VA.
While Tinnitus Shield™ has been specifically engineered to protect against the sounds which may cause tinnitus, AIC also plans to bring to market Aurex-3®, a patented, non-invasive therapy technology for the treatment and control of tinnitus.
Heading up AIC is CEO Mark Brenner, BSc, PhD, who draws on years of experience serving the tinnitus market in the United Kingdom. Brenner brings with him the vision and resources necessary to launch and distribute Aurex-3 throughout the US and Europe. For these reasons, the company believes that, under Brenner’s leadership and guidance, both AIC technologies can effectively penetrate this burgeoning market.
“The potential market for effective technologies that addresses the tinnitus marketplace is significant, considering the millions and millions of sufferers in the US and worldwide,” said Andre’ DiMino, president of ADMT.
Brenner commented, “AIC is now able to offer the full spectrum of support to the worldwide tinnitus community with its Tinnitus Shield, providing protection from noise-induced tinnitus, and the Aurex-3, as an active treatment and management system for those who have developed tinnitus. This is receiving great interest in the UK where we are actively working with The Tinnitus Clinic, a group of specialist tinnitus clinics. In the US we have active discussions with the American Tinnitus Association.”
Source: ADM Tronics Unlimited
With the Oticon Opn, users can expend less effort and recall more of what they encounter in a variety of complex listening environments. This open sound environment, powered by Oticon’s Velox platform, allows for greater speech comprehension, even in a challenging audiological setting with multiple speakers. With its OpenSound Navigator scanning the background 100 times per second, the Opn provides a clear and accurate sound experience.
Want to know what A.I. Hell is like?
How about interacting with a machine that repeatedly professes stupefaction when you just know it should know what you’re talking about?
I was excited when I heard last fall that Alphabet’s (GOOGL) Google’s new wireless ear pieces would perform a kind of “real time” translation of languages, as it was billed.
The ear pieces, “Pixel Buds,” which arrived in the mail the other day, turn out to be rather limited and somewhat frustrating.
They are in a sense just a new way to be annoyed by the shortcomings of Google’s A.I., Google Assistant.
The devices were unveiled at Google’s “Made By Google” hardware press conference in early October, where it debuted its new Pixel 2 smartphone, which I’ve positively reviewed in this space, and its new “mini” version of the “Google Home” appliance.
The Buds retail for $159 and can be ordered from Google’s online store.
Getting the buds to pair with the Pixel 2 XL that I use was problematic at first, but after a series of attempts it succeeded. I’ve noticed similar issues with other Bluetooth-based devices, so I soldiered on and got it to work.
The sound quality and the fit is fine. The device is very lightweight, and the tether that connects the two ear pieces — they are not completely wireless like Apple’s (AAPL) AirPods — snakes around the back of one’s neck and is not uncomfortable.
The adjustable loops on each ear piece made the buds fit in my ears comfortably and stay there while I moved around. So, good job, Google, on industrial design.
Translating was another story.
One first has to install Google Translate, an application from Google of which I’m generally a big fan. The app initially supports translation of 40 languages.
You invoke the app by putting your finger to the touch-sensitive spot on the right ear piece and saying something like, “Help me to speak Greek.” When you lift your finger, it invokes the Google Assistant on the Pixel 2 phone, who tells you in the default female voice that she will launch the Translate app.
Several times, however, the assistant told me she had no idea how to help. Sometimes she understood the request the second time around. It seemed to be hit or miss whether my command was understood or was valid. On a number of other occasions, she told me she couldn’t yet help with a particular language, even though the language was among the 40 offered. It seemed like more common languages, such as French and Spanish, elicited little protest. But asking for, say, the Georgian language to be translated stumped her, even though Georgian is in the set of supported tongues.
This dialogue with the machine to get my basic wishes fulfilled fell very far below the Turing Test:
Me: “Help me to speak Greek.”
Google: “Sorry, I’m not sure how to help with that yet.”
Me: “Help me to translate Greek.”
Google: “Sure, opening Google Translate.”
Me: “Help me to speak Georgian.”
Google: “Sorry, I’m not sure how to help with that.”
Me: “Help me to speak Georgian.”
Google: “Sorry, I don’t understand.”
Me: “Help me to speak Georgian.”
Google: “Sorry, I can’t help with that yet, but I’m always learning.”
Me: “Help me to translate Georgian.”
Google: “Sorry, I don’t know how to help with that.”
In answer to Thomas Friedman of The New York Times, who writes of a new era of “continuous learning” for humans, I would like all humans to tell their future robot masters, “Sorry, I can’t help with that yet, but I’m always learning.”
When it does work, the process of translating is a little underwhelming. The app launches, and you touch the right ear piece’s touch-sensitive area, and speak your phrase in your native language. As you’re speaking, Google Translate is turning that into transcribed text on the screen, in the foreign script. When you are fully done speaking, the entire phrase is played back in the foreign language through the phone’s speaker for your interlocutor to hear. That person can then press an icon in the Translate app and speak to you in their native tongue, and their phrase is played for you, translated, through your ear piece.
Even this doesn’t always go smoothly. Sometimes, after asking for help with one language, the Google Assistant would launch the Translate app and the app would be stuck on the previously used language. At other times, it was just fine. In the worst instances, the application would tell me it was having audio issues when I would tap the ear piece to speak, requiring me to kill the app and start again.
This is all rather cumbersome.
I went and tried Translate on my iPhone 7 Plus, using Apple’s AirPods, and had pretty much an equivalent experience, with somewhat less frustration. All I had to do was to double-tap the AirPods and say, “Launch Google Translate,” and then continue from there as normal. It’s slightly more limited in that the iPhone’s speaker is not playing back the translation for my interlocutor; that plays through the AirPods. But on the flip side, it’s actually a little easier to use the app because one can maintain a kind of “open mic” by pressing the microphone icon. The app will then continuously listen for whichever language is spoken, translating back and forth between the two constantly, rather than having to tell it at each turn who’s speaking.
All in all, then, Pixel Buds are just a fancy interface to Google Translate, which doesn’t seem to me revolutionary, and is rather less than what I’d hoped for, and very kludgy. It’s a shame, because I like Google Translate, and I like the whole premise of this enterprise.
At any rate, back to school, Google, keep learning.
Would you like a free hearing amplifier? Hearing aids for £15.99? I’m sure you have all seen such advertisements in local and national newspapers suggesting that your hearing can be restored for a nominal charge, or even for nothing at all. However, these devices are not ‘hearing aids’. The cheap price might sound enticing, but personal sound amplification products (PSAPs) could actually damage your hearing: they simply ‘amplify’ sounds with no consideration for your prescription, creating a strong potential for over-amplification, which can contribute to further hearing loss.
Like most hearing professionals, I am frequently asked by new patients, ‘What’s the difference between this £15.99 amplification device and your hearing aids?’ Besides the cost, there are several important differences, which can seriously affect your hearing.
A PSAP will not distinguish between the types of sound you are listening to; no adjustments are made for speech versus noise. Therefore, other than in simple listening situations such as TV viewing, the device won’t help in background noise. The PSAP is basically an amplifier, which makes everything louder. In comparison, a hearing aid is equipped with a digital sound processing chip that analyses the input sound and provides calculated gain that is comfortable and safe.
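The difference can be illustrated with a toy model: a PSAP applies the same fixed gain to every sound, while a hearing aid’s processor applies compression, giving soft sounds more gain and limiting loud ones. This is a deliberately simplified sketch with illustrative numbers only, not any real device’s fitting algorithm:

```python
def psap_gain(input_db):
    """A PSAP applies one fixed gain to every sound, loud or soft."""
    return 25.0  # dB of gain, regardless of input level


def hearing_aid_gain(input_db, gain_soft=25.0, gain_loud=5.0,
                     soft_db=50.0, loud_db=80.0):
    """Toy wide-dynamic-range compression: full gain for soft inputs,
    reduced gain for loud inputs, interpolated in between."""
    if input_db <= soft_db:
        return gain_soft
    if input_db >= loud_db:
        return gain_loud
    # Linear interpolation between the soft and loud anchor levels
    frac = (input_db - soft_db) / (loud_db - soft_db)
    return gain_soft + frac * (gain_loud - gain_soft)


# A 95 dB SPL shout: the PSAP pushes it to a potentially damaging
# 120 dB, while the compressed aid outputs a safer 100 dB.
print(psap_gain(95) + 95)         # 120.0
print(hearing_aid_gain(95) + 95)  # 100.0
```

The key point the sketch shows is that the PSAP’s output grows without limit as input level rises, which is exactly the over-amplification risk described above.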
The PSAP is a generic-fit instrument that provides the same fit and performance for every listener. Hearing aids are custom built and programmed, ensuring they are tailored individually to you. For the price conscious, there are better options available that do not have to cost the earth: basic digital hearing aids can be acquired for reasonable money.
Hearing aids properly fitted by a hearing professional you trust are the only safe way to improve and conserve your hearing. If you are concerned about your hearing, or about whether your current hearing devices are suitable, why not pop in to see us in Little Chalfont, Amersham? Appointments are not necessary but are advised, so call us on 01494 765144.
This is a subject on which I have had strong views. However, over time I have changed my position slightly. I have drawn on research from several recent journals, including https://www.ihsinfo.org/ihsv2/Ceus/pdf/2008_July_Aug_Sept_THP.pdf, to help clarify my current stance.
Hearing aid fitting software has a built-in audiometer to obtain hearing levels with the hearing aid in the ear. This procedure is called in situ audiometry; “in situ” is a Latin phrase meaning “in place”. In the case of hearing instruments, it refers to measurements taken with the hearing aid in its natural location: correctly fitted in the ear. The procedure also accounts for the depth of the instrument in the ear canal, the effectiveness of the seal, the effects of venting, and the specific receiver in that instrument. When we use the fitting software to set the target and perform the initial adjustments, we otherwise rely exclusively on the hearing threshold levels (HTLs) obtained during the audiologic evaluation. The fitting formulas used to set the target gain contain algorithms to compute the gain targets based on the desired input levels and hearing instrument style. However, these algorithms are all based on average data. By including data obtained from your specific patient and his or her specific hearing instrument, we add a level of customisation that patients expect from today’s sophisticated digital technology.
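The idea of computing gain targets from HTLs can be illustrated with the classic half-gain rule, a deliberately simplified precursor of modern prescriptive formulas such as NAL-NL2 or DSL v5 (which are level-dependent and far more refined). The audiogram values below are hypothetical:

```python
def half_gain_targets(htl_db):
    """Classic half-gain rule: prescribe insertion gain equal to half
    the hearing threshold level at each audiometric frequency.
    Illustration only; real fitting software uses level-dependent
    formulas (e.g. NAL-NL2, DSL v5) computed from far more data."""
    return {freq: hl / 2.0 for freq, hl in htl_db.items()}


# Hypothetical audiogram: a typical sloping high-frequency loss
# (frequency in Hz -> threshold in dB HL)
audiogram = {250: 20, 500: 30, 1000: 40, 2000: 55, 4000: 70}
print(half_gain_targets(audiogram))
# {250: 10.0, 500: 15.0, 1000: 20.0, 2000: 27.5, 4000: 35.0}
```

The point is simply that targets are a function of the measured thresholds; in situ audiometry feeds the formula thresholds measured through the actual instrument in the actual ear, rather than audiometric averages.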
Once the HTLs are corrected for the hearing aid insertion effects, the hearing aid must be calibrated so that its gain response matches the gain targets. Real-ear measurement, as a technique for objectively verifying the performance characteristics of a hearing aid, is recommended as a best practice in hearing aid fittings (Valente, 2006). However, it is not widely used, for reasons such as expense, time limitations, and the need for cumbersome equipment. As a result, about 60% of hearing professionals do not use real-ear measurements (Kirkwood, 2006). Differences in the acoustic characteristics of individual ear canals are quite apparent and speak to the need for individual measures to add precision to the fitting, rather than relying on average data. The target match will be inaccurate for the individual ear to the extent that the individual’s RECD differs from the average.
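The effect of an individual RECD (real-ear-to-coupler difference) that departs from the average can be sketched numerically. All dB values below are illustrative, not clinical data; the point is only the arithmetic of the error:

```python
def real_ear_output(coupler_db, recd_db):
    """Real-ear output is approximately coupler output plus the
    individual's RECD at each frequency."""
    return {f: coupler_db[f] + recd_db[f] for f in coupler_db}


# Frequency (Hz) -> dB; all values illustrative
real_ear_target = {500: 15, 1000: 20, 2000: 28, 4000: 35}
average_recd    = {500: 4, 1000: 6, 2000: 8, 4000: 12}    # population average
individual_recd = {500: 7, 1000: 10, 2000: 13, 4000: 18}  # this ear, e.g. a small canal

# Programming against the AVERAGE RECD: coupler target = real-ear target - average RECD
coupler = {f: real_ear_target[f] - average_recd[f] for f in real_ear_target}

# What actually arrives in THIS ear, given its own RECD:
actual = real_ear_output(coupler, individual_recd)
error = {f: actual[f] - real_ear_target[f] for f in real_ear_target}
print(error)  # {500: 3, 1000: 4, 2000: 5, 4000: 6}
```

The fitting misses target at every frequency by exactly the difference between this ear’s RECD and the average, which is why individual measurement adds precision.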
Verification of the hearing aid prescription is essential to fitting success. Without verification, you do not know how the hearing aid is performing, and therefore whether the patient is benefiting. I would estimate that at least 75% of private hearing centres still do NOT verify their fittings, compared with roughly 95% of NHS departments that do. This means that many premium and advanced hearing aids fitted privately may be underperforming in comparison to more basic hearing aids fitted by the NHS. National hearing aid companies generally do not verify their fittings, and often fit the aid to the manufacturer’s settings; when adjustments are made, they are often made blindly, without knowing the effect they have on output. The research indicates that verification is still needed to ensure the prescription is met, and that in situ measures are not enough on their own; a stand-alone verification device provides the best option (http://www.ncbi.nlm.nih.gov/pubmed/21376007). I used to feel that in situ measurements would sufficiently tailor hearing aid gain to accommodate different ear canal properties. I naively based that assumption on patients’ first-fit satisfaction and acceptance, as patients who were fitted to REM targets were often less satisfied than patients who were fitted using in situ measures. After reviewing many of my fittings with real-ear measurements, I have found that some manufacturers match to target better than others, but that there is still room for improvement in 7 out of 10 patients. Using data obtained directly from your patient will ensure the most accurate initial fitting and will help deliver high patient satisfaction. Therefore, I feel that a combination of both will result in a more precise fitting, one representative of the individual rather than of average data. If a centre is doing neither, you should seriously consider whether to use them.
If you feel that your hearing aid is not performing properly or that it is not programmed correctly, then contact us on 01494 765144.