Red state residents lead growing rebellion against data centers that Trump loves

Hans Blaster

Call Me Al
Mar 11, 2017
24,619
18,006
56
USA
✟465,682.00
Country
United States
Gender
Male
Faith
Atheist
Marital Status
Private
Politics
US-Democrat
I think that's where there's some misunderstanding with regards to how it's leveraged.

The AI workflows that are processing it have already had detailed instructions injected into the organization-specific customized model.
OK.
Meaning, it's not just a guy uploading massive CSV files and asking a chat bot a generic question like "tell me stuff about this data"
Can we drop all of this talk about chat bots? Even the chat bots have sub-bots to do things like image generation.

What are the "AI" methods being used? I'm no expert, but I have been exposed to several via talks.
There's a human upstream injecting the model with detailed business rules, KPIs, etc... so it knows how to make the decisions.
Well, they're not dumber than I'd thought before I read your post.
The AI model can get its arms around the massive data, interpret where to join things together, and identify bad records and all that stuff quicker than a SQL report writer ever could.
Bad entry location/ error detection is a problem I've dealt with. I can see how trained models could be useful for that.
People are still thinking of AI in terms of people asking chatbots things, and robots moving stuff around in a warehouse. That's really only a small piece of the pie in terms of AI usage and computing power usage. The bigger footprint is stuff happening behind the scenes that nobody ever sees.

Amazon's Contact Lens and Lex are quite proficient at it.
They have software products too? (Definitely going on my 2029 anti-trust break up list.)
We have a client (that I won't name) that has a pretty large contact center ecosystem, and is handling about 30,000 calls per month. (ranging from quick 5 minute phone calls, up to calls spanning a half hour or more)
That doesn't seem like a lot. Even at your low end for long calls (30 minutes) a call worker can do 15 of those a day, or 2000 person-days per month. That's 100 full time operators. (Less if 24/7, somewhat more with some part-time employees.)
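A quick back-of-envelope check of that estimate. The shift length and workdays-per-month figures below are my own assumptions, not numbers from the thread:

```python
# Sanity check of the staffing estimate: 30,000 calls/month at the
# long-call end of 30 minutes each.
calls_per_month = 30_000
call_minutes = 30
talk_minutes_per_shift = 7.5 * 60    # assumption: 7.5 talk-hours per shift
workdays_per_month = 20              # assumption

calls_per_operator_day = talk_minutes_per_shift / call_minutes
person_days = calls_per_month / calls_per_operator_day
operators = person_days / workdays_per_month

print(f"{calls_per_operator_day:.0f} calls per operator-day")
print(f"{person_days:.0f} person-days per month")
print(f"~{operators:.0f} full-time operators")
```

Which lands on the same ~100 FTE figure.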
Their internal Quality/Verifications team only consists of about 20 people.
For a workforce of ~100 FTE answering calls, 20% for Quality verification sounds high.
Even if that team was doubled in size, there's no way they'd be able to listen to and analyze a large percentage of the calls. So before, they were relegated to randomly auditing wav recordings of phone calls, combined with pulling up ones where there was a customer who called in to complain about the rep they spoke with.
That's what random sampling is for.
And it's not a blind-trust situation. It's not as if orgs are spinning these things up without scale-up testing.

The proof-of-concept phase for those situations is almost always to send a few small batches of calls up to it, and then compare what the tool rated them as versus what the human rated them as. In over 95% of the calls, the review outcomes were in the same ballpark, and for the 5% that weren't, some of those differences were actually the human making a mistake or not catching something that happened on the call.
I'm still skeptical of "sentiment ratings". I find them inaccurate to useless when I'm asked directly about my sentiment about something, like the performance of a call operator.
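That kind of POC comparison can be sketched in a few lines. Every number here is made up for illustration; nothing is from the actual client data:

```python
# Illustrative only: compare tool vs. human quality scores on the same
# batch of calls and count how many land in "the same ballpark".
human_scores = [88, 92, 75, 60, 95, 81, 70, 99, 85, 77]
tool_scores  = [90, 91, 73, 62, 94, 65, 72, 98, 86, 79]

BALLPARK = 5  # assumption: within 5 points counts as agreement
agree = sum(abs(h - t) <= BALLPARK for h, t in zip(human_scores, tool_scores))
rate = agree / len(human_scores)
print(f"agreement: {agree}/{len(human_scores)} calls ({rate:.0%})")
# The disagreements are the calls worth a second human look.
```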
 

ThatRobGuy

Part of the IT crowd
Site Supporter
Sep 4, 2005
30,116
17,588
Here
✟1,586,348.00
Country
United States
Gender
Male
Faith
Atheist
Marital Status
Single
Politics
US-Others
Can we drop all of this talk about chat bots? Even the chat bots have sub-bots to do things like image generation.

What are the "AI" methods being used? I'm no expert, but I have been exposed to several via talks.
Depends on the models/protocols being used.

Did you have any specific example scenarios in mind?
Well, they're not dumber than I'd thought before I read your post.
That doesn't seem like a lot. Even at your low end for long calls (30 minutes) a call worker can do 15 of those a day, or 2000 person-days per month. That's 100 full time operators. (Less if 24/7, somewhat more with some part-time employees.)
For a workforce of ~100 FTE answering calls, 20% for Quality verification sounds high.
It's not so much the answering of the calls, it's being able to quality-score and analyze them after the fact without having to rely on random sampling (which has numerous flaws in that sort of environment)
I'm still skeptical of "sentiment ratings". I find them inaccurate, to useless when asked directly about my sentiment about something, like the performance of a call operator.
Based on the clients I've helped implement it for, they've seen noticeable improvements across all key metrics.

Is it perfect? Obviously not.

But neither is 20 people picking a few dozen random calls to listen to every day.

I can provide a tangible example.

For certain kinds of call center work, certain verbal disclosures are an absolute must (depending on the type of call they're handling). If they don't do it correctly and read it 100% verbatim, and a person decides to file a complaint, there are fines associated with that.

If a person is neglecting to do that or having issues doing it correctly, that's a red flag that more coaching and training is needed.


Their QA team caught very few of those instances happening with their random sampling approach. The AI tools leveraged were able to not only give the precise number (by agent), but the times of day and types of call it was more prevalent on. (Something that would be well beyond the expectations and skillset of an $18/hour QA/review staff member.)
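A minimal sketch of the kind of check described above. The disclosure string, transcripts, agents, and hours are all hypothetical; nothing here reflects the actual client or their tooling:

```python
# Flag calls missing a required verbatim disclosure, grouped by agent and hour.
from collections import Counter

REQUIRED = "this call may be monitored or recorded"  # hypothetical disclosure

calls = [
    {"agent": "A", "hour": 9,  "text": "Hi, this call may be monitored or recorded. How can I help?"},
    {"agent": "A", "hour": 16, "text": "Hi there, how can I help you today?"},
    {"agent": "B", "hour": 16, "text": "Hello, this call may be monitored or recorded."},
    {"agent": "B", "hour": 17, "text": "Thanks for calling, what do you need?"},
]

misses = Counter(
    (c["agent"], c["hour"]) for c in calls if REQUIRED not in c["text"].lower()
)
for (agent, hour), n in sorted(misses.items()):
    print(f"agent {agent} at {hour}:00 missed {n} disclosure(s)")
```

In a real deployment the transcripts would come from speech-to-text, and an exact substring match would likely be loosened to a fuzzy or token-level comparison.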

There are obviously other examples... but given what we know about the C-suite, they hate spending money, they certainly don't like to spend money on something that tanks their results, and they often implement these things in a more incremental approach (nobody is cutting over to these tools like the flip of a switch). So the fact that they're adopting it, sticking with it, and seeking to expand it once they see the outcomes means it's working.
 

Hans Blaster
I've been holding back my response to this post about the nature of "AI" data centers until my other conversations in the thread were "clear", but I'm going to do it now anyway.
I assumed we would be talking about the new data centres being built, not historical ones (which partially explains the NIMBY phenomenon).
All the new data centres being built are much, MUCH larger than older ones.
New ones are built:

Traditional Data Centers vs. AI-Ready Data Centers: A New Era of Infrastructure
I read this article (and the ones posted by @Tuur) and I have one significant question. First, these types of "data centers" have been typical at the top end of high-performance computing (HPC) for 10-15 years: high power density and water cooling.

What stuck out from that article (and the others) was the term "HPC" to describe "AI data centers". For those who've been involved in HPC for a long time, we've seen a lot of "pretenders" using "HPC" or "supercomputer" to describe their clusters incorrectly. The enduring characteristic of a supercomputer (HPC) is the fast internal network, or interconnect, that allows the whole thing to work together on a single task, from the Cray-1 of the past to the HPE Cray EX-series machines that are the "top 3" and 6 of the top 10 supercomputers on the planet.

These articles talk about bandwidth to storage (I/O) systems in these types of "AI" data centers, but it is not made clear whether these systems have fast internal interconnects. If they are not needed (and I don't know what workload would require them), builders would skip them, because they are very expensive. (The interconnect is typically the most expensive part of a supercomputer and is built with custom ASICs.)

Do "AI" training loads really involve large amounts of inter-node communication? I don't see evidence of that in my search attempts. What kind of interaction am I talking about? Let's look at an example derived from my own work in numerical simulation on supercomputers.

Start by spreading the problem across a couple dozen compute cabinets (racks, if you like), with a couple thousand nodes and about 50,000 individual processes (MPI ranks). Each rank contains a part of the simulation, and at the end of a step it exchanges data with some of the other ranks. A total of a few hundred GB is transmitted. Now, that's not too much, but it is the timing requirements that matter. It needs to push that data in a burst lasting roughly 10 ms (0.01 s), because the compute stage that follows only lasts roughly 100 ms (0.1 s) before the communication needs to be done again. This cycle repeats a couple million times to complete the simulation. (Not all in one run; output is saved every few minutes for restart and analysis. Individual runs last several hours to a full day.) At the same time, a couple of similarly sized jobs are running on other nodes with their own demanding communication patterns, sharing the same network hardware.
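The timing requirements in that example translate into concrete bandwidth figures. Taking 300 GB as a stand-in for "a few hundred GB" (my assumption, not a measured number):

```python
# Back-of-envelope interconnect requirements from the simulation example.
nodes = 2_000
exchange_bytes = 300e9   # assumption: "a few hundred GB" taken as 300 GB
burst_s = 0.01           # ~10 ms communication burst
compute_s = 0.1          # ~100 ms compute stage between bursts

aggregate_bw = exchange_bytes / burst_s   # machine-wide rate during the burst
per_node_bw = aggregate_bw / nodes        # injection rate each node must sustain
duty = burst_s / (burst_s + compute_s)    # fraction of time the fabric is loaded

print(f"aggregate burst bandwidth: {aggregate_bw / 1e12:.0f} TB/s")
print(f"per-node burst bandwidth:  {per_node_bw / 1e9:.1f} GB/s")
print(f"network duty cycle:        {duty:.0%}")
```

The point being that the fabric sits mostly idle but must absorb tens of TB/s in short bursts, which is exactly what cheap storage-oriented networking cannot do.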

Does "AI training" really need internal communication hardware of that kind? Do they even have it?

Clearly the new centres are not really comparable to the old ones. You are welcome to continue to trot out impressive "systems related" numbers but it doesn't really matter if the localized impact does NOT really reflect older trends.

And what the communities getting these centres are expecting is SIGNIFICANT drains on resources.

Can you accept that this is a reasonable concern?


In my province, the town of Olds, Alberta is getting a data centre... 9,000 people. You think their electricity and water usage is NOT going to be impacted by a data centre? Pffffffft.
The burst of "data center" construction, mostly tied to "AI", is large enough that people are noticing it.

The xAI facility in Memphis is a notorious polluter from onsite gas turbine generators (run in excess of their permitted usage), with power demands about 30 times larger than the 10-30 MW usage of those top-10 supercomputers.
 

Hans Blaster
Depends on the models/protocols being used.

Did you have any specific example scenarios in mind?
Specifically: if you are talking about something, then you should be specific about it.
It's not so much the answering of the calls, it's being able to quality-score and analyze them after the fact without having to rely on random sampling (which has numerous flaws in that sort of environment)

Based on the clients I've helped implement it for, they've seen noticeable improvements across all key metrics.
Metrics. Ugh. I'm reminded that the worst things about working come from the business world (and, I assume, "management school").
Is it perfect? Obviously not.

But neither is 20 people picking a few dozen random calls to listen to every day.

I can provide a tangible example.

For certain kinds of call center work, certain verbal disclosures are an absolute must (depending on the type of call they're handling). If they don't do it correctly and read it 100% verbatim, and a person decides to file a complaint, there are fines associated with that.

If a person is neglecting to do that or having issues doing it correctly, that's a red flag that more coaching and training is needed.


Their QA team caught very few of those instances happening with their random sampling approach. The AI tools leveraged were able to not only give the precise number (by agent), but the times of day and types of call it was more prevalent on. (Something that would be well beyond the expectations and skillset of an $18/hour QA/review staff member.)
I see. They don't want to understand the performance of their employees so much as give them all a robot supervisor to watch everything they do.
There are obviously other examples... but given what we know about the C-suite, they hate spending money, they certainly don't like to spend money on something that tanks their results, and they often implement these things in a more incremental approach (nobody is cutting over to these tools like the flip of a switch). So the fact that they're adopting it, sticking with it, and seeking to expand it once they see the outcomes means it's working.
 

Lukaris

Orthodox Christian
Site Supporter
Aug 3, 2007
9,169
3,427
Pennsylvania, USA
✟1,039,860.00
Country
United States
Gender
Male
Faith
Eastern Orthodox
Marital Status
Single
President Donald Trump's administration has been heralding the construction of data centers to power artificial intelligence (AI) infrastructure across the country. But many red state residents are becoming increasingly angry about data centers' intrusion on their rural communities.
That's according to a Tuesday article by the Washington Post's Evan Halper entitled "The data center rebellion is here, and it's reshaping the political landscape," which reported that residents in deep-red states like Indiana, Oklahoma and elsewhere are showing up in droves to public hearings solely to speak out against proposed data center construction. The Post zeroed in on an ongoing conflict over a planned data center in Sand Springs, Oklahoma, where Gov. Kevin Stitt (R) has championed the project.
"We know Trump wants data centers and Kevin Stitt wants data centers, but these things don’t affect these people," Trump supporter Brian Ingram said. "You know, this affects us."
U.S. Secretary of Energy Chris Wright admitted that the data centers are unpopular, as they have been tied to higher utility costs in adjacent communities due to their immense power requirements. And the Post noted that Rep. Marjorie Taylor Greene (R-Ga.) has also railed against data centers due to both their electricity consumption and their draining of precious freshwater sources.

Maybe there is hope they will realize what they voted for.

Keep in mind the Trump admin is fighting tooth and nail to prevent states from being able to regulate these data centers, with the same vigor it is trying to prevent states from regulating gambling taking place within their own borders.

Just what America needs, more degenerate gamblers.
At the state level, regulation of AI technology and data centers is a bipartisan concern.


 