Frequently Asked Questions
ACT AI stands for "Applied Claims Technology AI". ACT AI can be thought of as a discrete set of processes and procedures applied to your narrative documentation by a computer. It reads documentation much the way people would read it, only faster and better, and more consistently, both across a population of claim adjuster notes in one period and across multiple periods.
Our technology complements any business system because a client's sentence data is pulled from wherever it is stored and processed through our technology platform.
The best analogy is to think of the way an Excel spreadsheet complements business systems. Just as numerical data is pulled out of business systems and evaluated in Excel, sentence data is pulled out and evaluated in our technology.
The critical difference is CONTEXT.
Text processing is focused on words and on identifying the clusters, themes, and patterns that relate to words and phrases. This is accomplished through frequencies and statistical patterns that emerge from applying computational horsepower.
Sentence data processing is focused on identifying granular, factual information in the context of the entire sentence, of all the sentences in a document, and of all the documents within a particular entity, e.g., a claim, policy, or patient.
That context enables us to distinguish whether an event or state has happened, might happen, or has not happened.
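To illustrate the idea, here is a deliberately simplified sketch of how context can change the status of an event. This is a hypothetical keyword-rule example for illustration only, not SDRefinery's actual method, which the document describes as full grammar parsing in context.

```python
import re

# Hypothetical, simplified sketch: classify a sentence's event status as
# "happened", "might happen", or "did not happen" using keyword rules.
# A production system would use grammatical parsing of the whole sentence
# and surrounding documents; this only shows why context matters.

NEGATION = re.compile(r"\b(no|not|never|denies|denied|without)\b", re.I)
HEDGE = re.compile(r"\b(may|might|could|possible|possibly|suspected)\b", re.I)

def event_status(sentence: str) -> str:
    if NEGATION.search(sentence):
        return "did not happen"
    if HEDGE.search(sentence):
        return "might happen"
    return "happened"

print(event_status("Claimant underwent knee surgery on 3/12."))  # happened
print(event_status("Claimant may require knee surgery."))        # might happen
print(event_status("Claimant denies any prior knee surgery."))   # did not happen
```

Note how the same fact, "knee surgery", yields three different answers depending on the words around it; word-frequency text processing alone would count all three sentences as mentions of surgery.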
Natural Language Processing (NLP) has become a very broadly used term and means a lot of different things to different people. In the broader market, NLP can mean voice recognition, speech interpretation or pattern identification. Each of those functions may have different capabilities depending on the specific purpose and use case.
Our goal is to focus on the business solution you're receiving as a client without wasting your time describing how we're doing it or putting a label on it. Simply put, we have a variety of tools and strategies for identifying and extracting Events & Activities in context (happened, did not happen, or may happen) as needed to answer business questions for our clients.
First, Big Data touches all types of data including unstructured data (e.g. sentence data) and structured data (e.g. name, address, telephone number, zip code, and so forth). Second, the objective of touching the data is usually a broader effort to identify themes, trends, patterns and possibilities that exist across the data.
On the other hand, SDRefinery is focused on finding specific, granular facts that relate to the needs of a business user.
Consider, for example, Google as the ultimate Big Data engine. If a user enters the phrase "8th grade grammar 55442 [zip code]" into the search box, Google scours hundreds of millions of records to return just over 4,000 records for the user to review.
SDRefinery’s expertise is reading and interpreting the information in the 4,000 records whereas Big Data’s expertise is finding the 4,000 records.
Machine Learning can broadly be described as the process of using the computer to make decisions. By contrast, SDRefinery's process is focused on using the computer to improve the productivity of people.
Machine Learning requires users to define a "training set" of specimen documents that contain the desired snippets of information. Using this training set, the computer learns to process millions of records and execute pre-defined tasks.
SDRefinery enables users to be more effective and efficient in creating comprehensive and accurate training sets used in the machine learning process.
Businesses face two enormous constraints when working with coded data. First, codes are always outdated: they are based on historical and/or expected activity, while as a business you care about what is happening here and now.
Second, codes significantly limit what can be known about any topic. Competitors working with sentence data will therefore have far greater insight into any given issue, and they'll have it much faster than a business relying on coded data.
Yes, we work with scanned and imaged data; we work with sentence data wherever it is captured and stored. As expected, a poorly scanned or imaged document produces a significant amount of noise that must be removed before effective grammar parsing can occur.
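The kind of noise removal described above might look like the following sketch. The artifact patterns (stray scan symbols, repeated punctuation, broken whitespace) are assumptions about typical OCR output, not a description of the platform's actual cleanup rules.

```python
import re

# Hypothetical noise-removal sketch for poorly scanned text, assuming a
# few common OCR artifacts. Real cleanup rules depend on the scanner and
# document type and would be tuned per client.

def clean_ocr_line(line: str) -> str:
    line = re.sub(r"[|~^#*]+", " ", line)         # drop stray scan symbols
    line = re.sub(r"([.,;:!?])\1+", r"\1", line)  # collapse repeated punctuation
    line = re.sub(r"\s+", " ", line)              # normalize whitespace
    return line.strip()

noisy = "Claimant ~~ reported || back pain,,,  on  3/12.."
print(clean_ocr_line(noisy))  # Claimant reported back pain, on 3/12.
```

Only after this kind of scrubbing can the cleaned sentence be handed to grammar parsing with any confidence.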
Our focus is on sentence data represented by typed text characters. If a client solution required this capability, we could license a third-party tool and plug it into our technology platform.