Hans Blaster
Call Me Al
- Mar 11, 2017
OK. I think that's where there's some misunderstanding with regard to how it's leveraged.

The AI workflows that are processing it have already had detailed instructions injected into the organization-specific customized model.
Can we drop all of this talk about chat bots? Even the chat bots have sub-bots to do things like image generation.

Meaning, it's not just a guy uploading massive CSV files and asking a chat bot a generic question like "tell me stuff about this data."
What are the "AI" methods being used? I'm no expert, but I have been exposed to several via talks.
Well, they're not more dumb than I'd thought before I read your post.

There's a human upstream injecting the model with detailed business rules, KPIs, etc., so it knows how to make the decisions.
Bad entry location/error detection is a problem I've dealt with. I can see how trained models could be useful for that.

The AI model can just get its arms around the massive data, interpret where to join things together, and identify bad records and all that stuff quicker than a SQL report writer ever could.
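As a point of comparison for what a model would be replacing: here is a minimal, hypothetical sketch of the hand-coded, rule-based validation a report writer would otherwise maintain per column. The field names (`account_id`, `amount`, `entry_date`) and checks are illustrative assumptions, not anything from the thread; a trained model would aim to catch this class of bad record without someone enumerating every rule.

```python
from datetime import datetime

def flag_bad_records(records):
    """Return (record, reasons) pairs for entries that fail basic validity checks."""
    flagged = []
    for rec in records:
        reasons = []
        if not rec.get("account_id"):
            reasons.append("missing account_id")
        if rec.get("amount", 0) < 0:
            reasons.append("negative amount")
        try:
            # Expect ISO dates; anything else is treated as a bad entry.
            datetime.strptime(rec.get("entry_date", ""), "%Y-%m-%d")
        except ValueError:
            reasons.append("unparseable entry_date")
        if reasons:
            flagged.append((rec, reasons))
    return flagged

# Illustrative rows: one clean record, one with three problems.
rows = [
    {"account_id": "A1", "amount": 120.0, "entry_date": "2024-03-01"},
    {"account_id": "",   "amount": -5.0,  "entry_date": "03/01/2024"},
]
for rec, why in flag_bad_records(rows):
    print(rec["account_id"] or "<blank>", "->", ", ".join(why))
```

Every new column or data source means another hand-written rule here, which is the maintenance burden a trained model is supposed to absorb.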
They have software products too? (Definitely going on my 2029 anti-trust break-up list.)

People are still thinking of AI in terms of people asking chatbots things and robots moving stuff around in a warehouse. That's really only a small piece of the pie in terms of AI usage and computing power usage. The bigger footprint is stuff happening behind the scenes that nobody ever sees.
Amazon's Contact Lens and Lex are quite proficient at it.
That doesn't seem like a lot. Even at your low end for long calls (30 minutes), a call worker can do 15 of those a day, so 30,000 calls is 2,000 person-days per month. That's 100 full-time operators. (Fewer if 24/7, somewhat more with some part-time employees.)

We have a client (that I won't name) that has a pretty large contact center ecosystem and is handling about 30,000 calls per month (ranging from quick 5-minute phone calls up to calls spanning a half hour or more).
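The back-of-envelope staffing math above can be checked in a few lines. The call volume comes from the thread; the 15-calls-per-day rate and 20 workdays per month are the assumptions being made:

```python
# Back-of-envelope check of the staffing estimate (illustrative assumptions).
calls_per_month = 30_000
calls_per_worker_per_day = 15      # ~30-minute calls over a full shift
workdays_per_month = 20            # rough 5-day weeks

person_days = calls_per_month / calls_per_worker_per_day       # total monthly workload
full_time_operators = person_days / workdays_per_month         # FTE to cover it

print(person_days, full_time_operators)  # 2000.0 100.0
```

Shorter average calls or part-time staffing shift the numbers, but the order of magnitude (~100 operators) holds.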
For a workforce of ~100 FTE answering calls, 20% for quality verification sounds high.

Their internal Quality/Verification team only consists of about 20 people.
That's what random sampling is for.

Even if that team was doubled in size, there's no way they'd be able to listen to and analyze a large percentage of the calls. So before, they were relegated to randomly auditing WAV recordings of phone calls, combined with pulling up ones where a customer had called in to complain about the rep they spoke with.
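The random-audit approach described above is straightforward to sketch. This is a hypothetical illustration (the 2% sample rate and call ID format are invented for the example), showing how a QA team would draw an unbiased review batch from a month of calls:

```python
import random

def sample_for_audit(call_ids, rate=0.02, seed=None):
    """Pick a random fraction of calls for manual QA review."""
    rng = random.Random(seed)  # seedable for a reproducible audit batch
    k = max(1, round(len(call_ids) * rate))
    return rng.sample(call_ids, k)

# A month of 30,000 calls at a 2% audit rate -> 600 recordings to review.
calls = [f"call-{i:05d}" for i in range(30_000)]
audit_batch = sample_for_audit(calls, rate=0.02, seed=42)
print(len(audit_batch))  # 600
```

Even at 2%, that's 600 recordings a month to listen to, which is why the complaint-driven pulls supplement rather than replace the sample.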
I'm still skeptical of "sentiment ratings". I find them inaccurate to useless when asked directly about my sentiment about something, like the performance of a call operator.

And it's not a blind-trust situation. It's not as if orgs are spinning these things up without scale-up testing.
The proof-of-concept phase for those situations is almost always to send a few small batches of calls up to it and then compare what the tool rated them as to what the human rated them as. In over 95% of the calls, the review outcomes were in the same ballpark, and for the 5% that weren't, some of those differences were actually the human making a mistake or not catching something that happened on the call.
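The "same ballpark" comparison in that proof-of-concept phase amounts to an agreement-rate calculation. A minimal sketch, assuming hypothetical 1–5 ratings and a tolerance of one point for what counts as agreement (none of these specifics are from the thread):

```python
def agreement_rate(tool_scores, human_scores, tolerance=1):
    """Fraction of calls where tool and human ratings land within `tolerance`."""
    assert len(tool_scores) == len(human_scores)
    matches = sum(
        abs(t - h) <= tolerance for t, h in zip(tool_scores, human_scores)
    )
    return matches / len(tool_scores)

# Invented 1-5 ratings for ten calls, scored by the tool and by a human reviewer.
tool  = [4, 5, 2, 3, 5, 1, 4, 4, 3, 5]
human = [4, 5, 3, 3, 4, 1, 4, 2, 3, 5]
print(f"{agreement_rate(tool, human):.0%}")  # 90%
```

The disagreements are then reviewed individually; as noted above, some turn out to be the human reviewer's miss rather than the tool's.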