Do we need better natural language text search tools to mine the data in transcriptions?
That’s part of our formula for success. We couple our speech recognition with our own natural language processing, which lets us do some things that most people can’t. That will be part of how these systems work in the future. This isn’t a replacement for structured data entry, but it can help considerably.

The Hidden Markov model uses proximity, frequency, and context to interpret speech. Couldn’t you build a keyword table at the same time for searching the document later?

We’re doing something very close to that. We take the unstructured dictation and turn it into a normalized document: no matter how someone dictates, all documents of the same type look the same. We’ll find where in the dictation the person said “pre-op diagnosis” or “admitting diagnosis,” no matter what they call it, and we’ll put it in the right section of the transcription. We’re already restructuring the dictation in that way. You still need the transcription people to QA the final result.
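To make the normalization idea concrete, here is a minimal sketch of mapping dictated section labels to canonical headings. The alias list and function name are illustrative assumptions, not the vendor's actual vocabulary or implementation:

```python
import re

# Hypothetical alias table: dictated phrases -> canonical section headings.
# A real system would have a much larger, curated vocabulary.
SECTION_ALIASES = {
    "pre-op diagnosis": "Preoperative Diagnosis",
    "preop diagnosis": "Preoperative Diagnosis",
    "preoperative diagnosis": "Preoperative Diagnosis",
    "admitting diagnosis": "Admitting Diagnosis",
    "admission diagnosis": "Admitting Diagnosis",
}

def normalize_dictation(text: str) -> dict:
    """Split free-form dictation into canonical sections.

    Scans for any alias phrase, then assigns the text that follows it
    (up to the next alias) to that alias's canonical section name.
    """
    pattern = re.compile(
        "(" + "|".join(re.escape(a) for a in SECTION_ALIASES) + ")",
        re.IGNORECASE,
    )
    parts = pattern.split(text)
    # parts alternates: [preamble, alias, body, alias, body, ...]
    sections = {}
    for alias, body in zip(parts[1::2], parts[2::2]):
        canonical = SECTION_ALIASES[alias.lower()]
        sections[canonical] = body.strip(" .:;\n")
    return sections
```

So dictation beginning “Pre-op diagnosis: appendicitis. Admitting diagnosis: abdominal pain.” would yield a document with both findings filed under their canonical headings, regardless of the phrasing the physician used.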