At Media Jobs we like to bring you the most innovative companies. In this interview we learn about a disability innovation from a new company that could help disabled people regain their voice. About 1.5% of people in the Western world have some form of communication limitation due to medical conditions including motor neurone disease, cerebral palsy, stroke, brain damage and autism. Current “talk” solutions rely on body movements, but they do not truly enable individuals to actually speak.
Talkitt is unique in that it is an inexpensive smartphone app that uses the user’s own voice to communicate.
Imagine having something to say but no way to communicate it. Danny Weissberg, CEO of Voiceitt, the company behind Talkitt, came up with the idea in Israel back in 2012 after his grandmother had a stroke that severely impaired her speech.
We spoke with Voiceitt’s US Head of Business Development, Sara Smolley, to learn more about this yet-to-be-released product.
Roy: | This is Roy Weissman from MediaJobs.com. We’re talking with Sara Smolley from Voiceitt.
|
Roy: | You have a product called Talkitt. What is Talkitt? What was the reason for creating it? What problems are you solving?
|
Sara: | Talkitt is a speech recognition technology that translates the speech of people with speech disabilities in real time.
|
Roy: | Has no one been doing that? I don’t know if you’ve read about the Sumner Redstone case?
|
Sara: | No.
|
Roy: | Have you read about the Sumner Redstone lawsuit? Recently there was a lawsuit involving Sumner Redstone, the owner of Viacom. He’s 90-something years old and has a problem speaking, and there’s actually a nurse who translates what he’s saying. Maybe your software could help. Could you give me a sense of why you guys created this product?
|
Sara: | Absolutely. Each person on our team has a personal connection: a family member or friend who has a communication disorder. Actually, it’s more common than you think. About 1.5% of the population has some sort of communication disorder. Communication disorders are a rising trend among children in the United States. If you think of aging populations in the developed world, in the US, Europe, South Korea … people are getting older, and along with that come diseases related to age: degenerative diseases, ALS, MS, Parkinson’s disease. Unfortunately, the progression of these diseases often comes with extremely severe speech impairment. Especially for somebody who doesn’t have a cognitive disability, what hurts them most is that they can appear to have a cognitive disability because they can’t communicate in a clear and coherent fashion. That’s the problem we aim to solve. We really believe that giving people their voices back can change their lives.
|
Roy: | Does each person with a speech impediment have the same issues? Or do different people with different diseases … How do you match up different speech impediments? Aren’t they all unique to the individual?
|
Sara: | Absolutely, that’s the core of what we’re doing. We see each person as having their own unique way of speaking, almost their own unique language. Standard voice recognition works with standard speech. What we’re making is a technology that works for people with non-standard speech, with a unique pronunciation and a unique language.
|
Roy: | When was this company founded? Who founded it?
|
Sara: | The founder of the company is our CEO, Danny Weissberg. He co-founded it with our now CTO, Stas Yunkin. Both of them have technology backgrounds. In Danny’s case, his grandmother suddenly had a stroke and overnight lost her ability to communicate clearly. He often tells this story. It resonates with a lot of us who know that communication, or the lack of it, has a tremendous impact on your interactions and relationship with a person. Stas, our CTO, is an expert in signal processing and machine learning and leads our algorithm team, where each of our developers has a rare combination of skills: linguistics, algorithm development and machine learning.
|
Roy: | When did you guys start this product?
|
Sara: | The company was founded in 2012, but we started our product Talkitt a couple of years ago. I’m new to the team, just a few months in.
|
Roy: | How many customers do you have for the product?
|
Sara: | The product’s not available for sale yet. We’re in a beta version right now. We’re testing with selected partners in Israel, Europe and here in the US. Our US subsidiary is based out of Buffalo, NY. We’re working with disability organizations and clinics, possibly hospitals as well, to test the product with medical professionals and speech-language pathologists to get the input we need to further develop Talkitt until it’s ready for market, probably at the end of this year.
|
Roy: | You’re anticipating that this would be sold directly to consumers or be an institutional product?
|
Sara: | The first version of Talkitt will be a mobile application available directly to end users through the app store for a subscription fee of about $20 a month, which is very, almost laughably, competitive for the industry.
|
Roy: | In other words, they would have an app and if they talked on the phone, it would convert it into more understandable language?
|
Sara: | The way it works is very similar to what you would think of in speech recognition. Yes, the person will be holding their device, either an iPad or an iPhone, speak into it, and the machine will translate their speech. It is important to note that the first version of the product will be based on a limited vocabulary. For people with very severe speech impairments, a dictionary where the machine can understand and translate about 10–20 words is a life-changer. Later versions will allow greater freedom, letting the person communicate with more sentences and supporting mildly impaired speakers as well.
|
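For readers curious how a small-vocabulary recognizer of this kind can work in principle, here is a minimal sketch in Python. It is illustrative only and is not Voiceitt’s actual algorithm: it assumes each dictionary word already has a few recorded feature sequences (templates) from the calibration phase described later in the interview, and it matches a new utterance to the closest template with dynamic time warping. The function names and the template-matching approach are our own assumptions.

```python
# Illustrative sketch only -- not Voiceitt's actual algorithm.
# Utterances are assumed to already be feature sequences (e.g. MFCC frames),
# represented as numpy arrays of shape (frames, features).
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Dynamic-time-warping distance between two feature sequences."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])   # frame-to-frame distance
            cost[i, j] = d + min(cost[i - 1, j],       # insertion
                                 cost[i, j - 1],       # deletion
                                 cost[i - 1, j - 1])   # match
    return float(cost[n, m])

def recognize(utterance: np.ndarray,
              dictionary: dict[str, list[np.ndarray]]) -> str:
    """Return the dictionary word whose templates best match the utterance."""
    best_word, best_score = None, np.inf
    for word, templates in dictionary.items():
        score = min(dtw_distance(utterance, t) for t in templates)
        if score < best_score:
            best_word, best_score = word, score
    return best_word
```

With a 10-word dictionary, `recognize(new_utterance, dictionary)` would return the closest word, which an app could then display or speak aloud through a text-to-speech engine.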
Roy: | Do you have any sense of the percentage of the population that would need this or benefit from this?
|
Sara: | About 1.5% of the general population.
|
Roy: | Are you talking about the US or worldwide?
|
Sara: | Worldwide. We have market research with exact numbers for the United States, extrapolated to Europe. It’s a bigger number than you would think.
|
Roy: | Is there anyone else doing this currently?
|
Sara: | There are many communication devices; they’re called augmentative and alternative communication (AAC) devices. Amazing technologies, from eye tracking to tracking different movements of the body, but there’s no other communication device, that we know of, that allows the person to communicate using their own voice.
|
Roy: | The other ones are having them communicate how?
|
Sara: | There might be an eye-tracking device where the person uses different eye movements to signify words and letters. There are also communication boards, which can be used on an iPad, where the person points to or touches different images. That would be another way to communicate. All of these solve the communication problem by bypassing the voice altogether.
|
Roy: | You’re just using the voice, which is a first?
|
Sara: | Exactly.
|
Roy: | You’re not anticipating even having a product until the end of 2016?
|
Sara: | To the general public? The end of this year, yes.
|
Roy: | At that point, it’s going to be limited to only 10 words?
|
Sara: | Yes.
|
Roy: | Do you have any sense of when they think they’ll have a more robust vocabulary?
|
Sara: | An expanded vocabulary and greater freedom in using the app will come with a second version in 2017. It’s important to note that a key feature of Talkitt is that as people use it, the machine continues to adapt to the person’s unique pronunciation. The machine actually gets more efficient and robust as the person uses it. We’re also creating a voice recording database. As the person uses Talkitt, we’re storing the voices. We believe this kind of voice recording data doesn’t really exist yet. We see it as a potentially monetizable big-data asset and, potentially, a research tool for disordered speech more generally.
|
Roy: | This is basically artificial intelligence. It’s learning how people speak. How does it know it’s using the right words?
|
Sara: | The words of the user? The user trains the app. There’s a short calibration phase that takes a few minutes. Let’s say you build a dictionary starting with 10 words. The user repeats each word a few times, 3 or 4 times. That’s how the machine initially calibrates.
|
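As a rough illustration of what such a calibration phase might look like, here is a minimal sketch in Python that pairs with the matching sketch above. It reflects our own assumptions rather than Voiceitt’s implementation: the user records each dictionary word a few times, the recordings are turned into feature sequences and stored as personal templates, and confirmed recognitions can later be added back so the system keeps adapting to the speaker, as Sara describes. The `record_utterance` and `extract_features` helpers are hypothetical placeholders.

```python
# Illustrative sketch only -- not Voiceitt's implementation.
# record_utterance() and extract_features() are hypothetical placeholders for
# microphone capture and acoustic feature extraction (e.g. MFCC frames).
import numpy as np

def record_utterance() -> np.ndarray:
    """Placeholder: capture one utterance from the microphone as raw samples."""
    raise NotImplementedError

def extract_features(samples: np.ndarray) -> np.ndarray:
    """Placeholder: convert raw samples to a (frames, features) array."""
    raise NotImplementedError

def calibrate(words: list[str],
              repetitions: int = 3) -> dict[str, list[np.ndarray]]:
    """Build a personal dictionary: a few recorded templates per word."""
    dictionary: dict[str, list[np.ndarray]] = {}
    for word in words:
        print(f"Please say '{word}' {repetitions} times.")
        templates = []
        for _ in range(repetitions):  # the user repeats each word 3-4 times
            templates.append(extract_features(record_utterance()))
        dictionary[word] = templates
    return dictionary

def adapt(dictionary: dict[str, list[np.ndarray]],
          word: str, utterance: np.ndarray) -> None:
    """Add a confirmed recognition as a new template, so matching keeps
    adapting to the speaker's pronunciation over time."""
    dictionary.setdefault(word, []).append(utterance)
```

Calling `calibrate(["yes", "no", "water"])` once would take the few minutes Sara mentions, while something like `adapt` could run quietly each time the user confirms a correct translation.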
Roy: | In essence, a lot of the software would be developed uniquely for the individual. The individual would be training the app to learn more and more words. Is that correct?
|
Sara: | Yeah, that’s right.
|
Roy: | That sounds exciting. What made you decide to come work for this company?
|
Sara: | It’s a longer story, why I decided to work for Voiceitt. I grew up in Miami. I spent the last 5 years in Asia, working for startups. My first exposure to technology was in South Korea. It was through Koreans that I learned about the innovation scene in Tel Aviv. I went over there about a year ago and started freelancing in social technology. Then I found Voiceitt. It was around the time the company won a business competition called 43North. We won half a million dollars and opened up an office. Now we’re in Buffalo.
|
Roy: | That sounds exciting. Are you guys hiring at this point?
|
Sara: | We are looking for algorithm developers.
|
Roy: | So if you’re an algorithm developer, they want to hear from you.
|
Sara: | Sure.
|
Roy: | Is that in the US or overseas?
|
Sara: | In any of our locations. We need the best algorithm developers, people interested in speech research and social enterprise.
|
Roy: | That sounds great. Is there anything else you want to share that I didn’t ask about Talkitt?
|
Sara: | Just that we’re a company with a mission. We’re at the intersection of technology and medicine. It’s a really exciting field to be in.
|