
Red state residents lead growing rebellion against data centers that Trump loves

Hans Blaster

Call Me Al
Mar 11, 2017
24,641
18,015
56
USA
✟465,920.00
Country
United States
Gender
Male
Faith
Atheist
Marital Status
Private
Politics
US-Democrat
I think that's where there's some misunderstanding with regards to how it's leveraged.

The AI workflows that are processing it have already had detailed instructions injected into the organization-specific customized model.
OK.
Meaning, it's not just a guy uploading massive CSV files and asking a chat bot a generic question like "tell me stuff about this data"
Can we drop all of this talk about chat bots? Even the chat bots have sub-bots to do things like image generation.

What are the "AI" methods being used. I'm no expert, but I have been exposed to several via talks.
There's a human upstream injecting the model with detailed business rules, KPIs, etc... so it knows how to make the decisions.
Well, they're not dumber than I'd thought before I read your post.
The AI model can just get its arms around the massive data, interpret where to join things together, and identify bad records and all that stuff quicker than a SQL report writer ever could.
Bad entry location/error detection is a problem I've dealt with. I can see how trained models could be useful for that.
People are still thinking of AI in terms of people asking chatbots things, and robots moving stuff around in a warehouse. That's really only a small piece of the pie in terms of AI usage and computing power usage. The bigger footprint is stuff happening behind the scenes that nobody ever sees.

Amazon's Contact Lens and Lex are quite proficient at it.
They have software products too? (Definitely going on my 2029 anti-trust break up list.)
We have a client (that I won't name) that has a pretty large contact center ecosystem, and is handling about 30,000 calls per month. (ranging from quick 5 minute phone calls, up to calls spanning a half hour or more)
That doesn't seem like a lot. Even at your low end for long calls (30 minutes) a call worker can do 15 of those a day, or 2000 person-days per month. That's 100 full time operators. (Less if 24/7, somewhat more with some part-time employees.)
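That estimate, spelled out in a quick sketch (the 15-calls-per-day and 20-workdays-per-month figures are rough assumptions, not client data):

```python
# Back-of-the-envelope check of the staffing estimate above, using
# the figures quoted in the post: 30,000 calls/month at the 30-minute
# worst case, ~15 such calls per worker per day, ~20 workdays/month.
calls_per_month = 30_000
calls_per_worker_per_day = 15      # 8-hour shift with breaks (assumption)
workdays_per_month = 20            # assumption

person_days = calls_per_month / calls_per_worker_per_day
full_time_operators = person_days / workdays_per_month

print(f"{person_days:.0f} person-days/month -> ~{full_time_operators:.0f} FTE")
```

Which lands on the ~100 full-time operators mentioned above; 24/7 coverage or shorter average calls would shift it, but not by an order of magnitude.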
Their internal Quality/Verifications team only consists of about 20 people.
For a workforce of ~100 FTE answering calls, 20% for Quality verification sounds high.
Even if that team was doubled in size, there's no way they'd be able to listen to and analyze a large percentage of the calls. So before, they were relegated to randomly auditing wav recordings of phone calls, combined with pulling up ones where there was a customer who called in to complain about the rep they spoke with.
That's what random sampling is for.
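For what it's worth, the sample size a random audit needs doesn't scale with total call volume. A sketch using the standard proportion-estimate formula (the 5% failure rate and ±2% margin are made-up illustrations):

```python
import math

# How many randomly sampled calls must be audited to estimate a
# failure rate (e.g. missed disclosures) to a given precision?
# Standard result for a proportion: margin ~= z * sqrt(p*(1-p)/n).
def sample_size(p_guess: float, margin: float, z: float = 1.96) -> int:
    """Calls to audit for a ~95% CI of +/- `margin` around rate `p_guess`."""
    return math.ceil(z**2 * p_guess * (1 - p_guess) / margin**2)

# If ~5% of calls have a problem and we want +/- 2 percentage points:
n = sample_size(0.05, 0.02)
print(n)  # a few hundred calls, whether the center handles 30k or 3M
```

That's the statistical case for sampling; the counterargument in this thread is about catching *which* agents and *which* calls, not the overall rate.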
And it's not a blind-trust situation. It's not as if orgs are spinning these things up without scale-up testing.

The proof-of-concept phase for those situations is almost always to send a few small batches of calls up to it, and then compare what the tool rated them as against what the human rated them as. In over 95% of the calls, the review outcomes were in the same ballpark, and for the 5% that weren't, some of those differences were actually the human making a mistake or not catching something that happened on the call.
I'm still skeptical of "sentiment ratings". I find them inaccurate to useless, even when I'm asked directly about my sentiment on something, like the performance of a call operator.
 
Upvote 0

ThatRobGuy

Part of the IT crowd
Site Supporter
Sep 4, 2005
30,119
17,588
Here
✟1,586,657.00
Country
United States
Gender
Male
Faith
Atheist
Marital Status
Single
Politics
US-Others
Can we drop all of this talk about chat bots? Even the chat bots have sub-bots to do things like image generation.

What are the "AI" methods being used. I'm no expert, but I have been exposed to several via talks.
Depends on the models/protocols being used.

Did you have any specific example scenarios in mind?
Well, they're not dumber than I'd thought before I read your post.
That doesn't seem like a lot. Even at your low end for long calls (30 minutes) a call worker can do 15 of those a day, or 2000 person-days per month. That's 100 full time operators. (Less if 24/7, somewhat more with some part-time employees.)
For a workforce of ~100 FTE answering calls, 20% for Quality verification sounds high.
It's not so much the answering of the calls, it's being able to quality-score and analyze them after the fact without having to rely on random sampling (which has numerous flaws in that sort of environment)
I'm still skeptical of "sentiment ratings". I find them inaccurate to useless, even when I'm asked directly about my sentiment on something, like the performance of a call operator.
Based on the clients I've helped implement it for, they've seen noticeable improvements across all key metrics.

Is it perfect? Obviously not.

But neither is 20 people picking a few dozen random calls to listen to every day.

I can provide a tangible example.

For certain kinds of call center work, certain verbal disclosures are an absolute must (depending on the type of call they're handling). If a rep doesn't do it correctly and read it 100% verbatim, and a person decides to file a complaint, there are fines associated with that.

If a person is neglecting to do that or having issues doing it correctly, that's a red flag that more coaching and training is needed.


Their QA team caught very few of those instances happening with their random sampling approach. The AI tools leveraged were able to give not only the precise number (by agent), but the times of day and types of call it was more prevalent on. (Something that would be well beyond the expectations and skill set of an $18/hour QA/review staff member.)

There are obviously other examples... but given what we know about the C-suite, they hate spending money, they certainly don't like to spend money on something that tanks their results, and they often implement these things in an incremental way (nobody is cutting over to these tools like the flip of a switch). So the fact that they're adopting it, sticking with it, and seeking to expand it once they see the outcomes means it's working.
 
Upvote 0

Hans Blaster

Call Me Al
Mar 11, 2017
24,641
18,015
56
USA
✟465,920.00
Country
United States
Gender
Male
Faith
Atheist
Marital Status
Private
Politics
US-Democrat
I've been holding back my response to this post about the nature of "AI" data centers until my other conversations in the thread were "clear", but I'm going to do it now anyway.
I assumed we would be talking about the new data centres being built and not historical ones (which partially explains the NIMBY phenomenon).
All the new data centres being built are much, MUCH larger than older ones.
New ones are built:

Traditional Data Centers vs. AI-Ready Data Centers: A New Era of Infrastructure
I read this article (and the ones posted by @Tuur) and I have one significant question. First, these types of "data centers" have been typical at the top end of high-performance computing (HPC) for 10-15 years: high power density and water cooling.

What stuck out from that article (and the others) was the term "HPC" to describe "AI data centers". Those of us who've been involved in HPC for a long time have seen a lot of "pretenders" using "HPC" or "supercomputer" to describe their clusters incorrectly. The enduring characteristic of a supercomputer (HPC) is the fast internal network, or interconnect, that allows the whole thing to work together on a single task, from the Cray-1 of the past to the HPE Cray EX-series machines that hold the top 3 spots and 6 of the top 10 among supercomputers on the planet.

These articles talk about bandwidth to storage (I/O) systems in these types of "AI" data centers, but it is not made clear whether these systems have fast internal interconnects. If they are not needed (and I don't know what workload would require them), that matters, because they are very expensive: typically the most expensive part of a supercomputer, built with custom ASICs.

Do "AI" training loads really involve large amounts of inter-node communication? I don't see evidence of that in my search attempts. What kind of interaction am I talking about? Let's look at an example derived from my own work in numerical simulation on supercomputers.

Start by spreading the problem across a couple dozen compute cabinets (racks, if you like), with a couple thousand nodes and about 50,000 individual processes (MPI ranks). Each rank contains a part of the simulation, and at the end of a step it exchanges data with some of the other ranks. A total of a few hundred GB is transmitted. That's not too much on its own; it's the timing requirements that matter. The data needs to move in a burst lasting a few tens of milliseconds (~0.01 s), because the compute stage that follows only lasts a couple hundred milliseconds (~0.1 s) before the communication needs to happen again. This cycle repeats a couple million times to complete the simulation. (Not all in one run: output is saved every few minutes for restart and analysis, and individual runs last several hours to a full day.) At the same time, a couple of similarly sized jobs are running on other nodes with their own demanding communication patterns, sharing the same network hardware.
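To put rough numbers on that burst, here's an illustrative back-of-the-envelope (the ~300 GB, ~2,000 nodes, and ~10 ms figures are round approximations of the scenario described, not measurements):

```python
# Rough per-node network bandwidth implied by the exchange pattern
# described above: a few hundred GB spread across ~2,000 nodes must
# move within a ~10 ms communication window each step.
total_bytes = 300e9        # ~300 GB exchanged per step (illustrative)
nodes = 2_000
burst_seconds = 0.01       # ~10 ms window

per_node_bytes = total_bytes / nodes              # 150 MB per node per step
per_node_gbps = per_node_bytes * 8 / burst_seconds / 1e9

print(f"~{per_node_gbps:.0f} Gb/s per node during the burst")
```

That's on the order of a modern HPC interconnect's full line rate, per node, sustained every tenth of a second, which is exactly why these fabrics are so expensive.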

Does "AI training" really need internal communication hardware of that kind? Do they even have it?

Clearly the new centres are not really comparable to the old ones. You are welcome to continue to trot out impressive "systems related" numbers but it doesn't really matter if the localized impact does NOT really reflect older trends.

And what communities getting these centres are expecting are SIGNIFICANT drains on resources.

Can you accept that this is a reasonable concern?


In my province the town of Olds Alberta is getting a data centre...9000 people. You think their electricity and water usage is NOT going to be impacted by a data centre? Pffffffft.
The burst of "data center" construction, mostly tied to "AI", is large enough that people are noticing it.

The "x" AI facility in Memphis is a notorious polluter from onsite gas turbine generators (in excess of their permitted usage) with power demands about 30 times larger than the 10-30 MW usage of those top 10 supercomputers.
 
Upvote 0

Hans Blaster

Call Me Al
Mar 11, 2017
24,641
18,015
56
USA
✟465,920.00
Country
United States
Gender
Male
Faith
Atheist
Marital Status
Private
Politics
US-Democrat
Depends on the models/protocols being used.

Did you have any specific example scenarios in mind?
Specifically, I'm asking that if you are talking about something, you be specific about what it is.
It's not so much the answering of the calls, it's being able to quality-score and analyze them after the fact without having to rely on random sampling (which has numerous flaws in that sort of environment)

Based on the clients I've helped implement it for, they've seen noticeable improvements across all key metrics.
Metrics. Ugh. I'm reminded that the worst things about working come from the business world (and, I assume, "management school").
Is it perfect? Obviously not.

But neither is 20 people picking a few dozen random calls to listen to every day.

I can provide a tangible example.

For certain kinds of call center work, certain verbal disclosures are an absolute must (depending on the type of call they're handling). If a rep doesn't do it correctly and read it 100% verbatim, and a person decides to file a complaint, there are fines associated with that.

If a person is neglecting to do that or having issues doing it correctly, that's a red flag that more coaching and training is needed.


Their QA team caught very few of those instances happening with their random sampling approach. The AI tools leveraged were able to give not only the precise number (by agent), but the times of day and types of call it was more prevalent on. (Something that would be well beyond the expectations and skill set of an $18/hour QA/review staff member.)
I see. They don't want to understand the performance of their employees so much as give them all a robot supervisor to watch everything they do.
There are obviously other examples... but given what we know about the C-suite, they hate spending money, they certainly don't like to spend money on something that tanks their results, and they often implement these things in an incremental way (nobody is cutting over to these tools like the flip of a switch). So the fact that they're adopting it, sticking with it, and seeking to expand it once they see the outcomes means it's working.
 
Upvote 0

Lukaris

Orthodox Christian
Site Supporter
Aug 3, 2007
9,170
3,428
Pennsylvania, USA
✟1,040,208.00
Country
United States
Gender
Male
Faith
Eastern Orthodox
Marital Status
Single
President Donald Trump's administration has been heralding the construction of data centers to power artificial intelligence (AI) infrastructure across the country. But many red state residents are becoming increasingly angry about data centers' intrusion on their rural communities.
That's according to a Tuesday article by the Washington Post's Evan Halper entitled "The data center rebellion is here, and it's reshaping the political landscape," which reported that residents in deep-red states like Indiana, Oklahoma and elsewhere are showing up in droves to public hearings solely to speak out against proposed data center construction. The Post zeroed in on an ongoing conflict over a planned data center in Sand Springs, Oklahoma, where Gov. Kevin Stitt (R) has championed the project.
"We know Trump wants data centers and Kevin Stitt wants data centers, but these things don’t affect these people," Trump supporter Brian Ingram said. "You know, this affects us."
U.S. Secretary of Energy Chris Wright admitted that the data centers are unpopular as they have been tied to higher utility costs in adjacent communities, due to their immense power requirements. And the Post noted that Rep. Marjorie Taylor Greene (R-Ga.) has also railed against data centers due to both their electricity consumption and their draining of precious freshwater sources. MSN

Maybe there is hope they will realize what they voted for.

Keep in mind the Trump admin is fighting tooth and nail to prevent states from being able to regulate these data centers, with the same vigor they are trying to prevent states from regulating gambling taking place in their own states.

Just what America needs, more degenerate gamblers.
At the state level, regulation of AI technology and data centers is a bipartisan concern.


 
Upvote 0

Tuur

Well-Known Member
Oct 12, 2022
3,440
1,731
Southeast
✟118,083.00
Country
United States
Gender
Male
Faith
Christian
Marital Status
Married
The "x" AI facility in Memphis is a notorious polluter from onsite gas turbine generators (in excess of their permitted usage) with power demands about 30 times larger than the 10-30 MW usage of those top 10 supercomputers.
This struck me as humorous in a “compared to what?” sort of way. The “what” being diesel and coal. Natural gas and propane are cleaner. Not squeaky clean, just cleaner than other combustibles.
 
Upvote 0

Tuur

Well-Known Member
Oct 12, 2022
3,440
1,731
Southeast
✟118,083.00
Country
United States
Gender
Male
Faith
Christian
Marital Status
Married
My roll-your-own AI tinkering never got above two layers of weighted paths, and that was as old as Byte magazine. Back then, it was all about paths and weights and running simulations again and again until it was right or close enough. In simple tinkering, it wasn’t as calculation intensive as tinkering with Mandelbrot sets (because they looked pretty). How neural nets are set up now or even if neural nets are still used, I can’t say. Repetitively running simulations to train AI could become calculation intensive if they still use neural nets and the nets become more complex.
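For anyone who never saw that era: the train-by-repetition idea can be sketched in a few lines of modern Python. This is a single layer of weighted paths learning logical AND with the classic perceptron rule (an illustrative toy, not the original Byte-era code; stacking a second set of weights gives the two-layer version described above):

```python
import random

# A single "layer of weighted paths" trained by running the same
# examples again and again, nudging weights until the output is right.
random.seed(0)
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # logical AND
w = [random.uniform(-1, 1) for _ in range(2)]
bias = 0.0
lr = 0.1

for _ in range(100):                      # repeat the "simulation"
    for (x1, x2), target in data:
        out = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
        err = target - out                # nudge weights toward the target
        w[0] += lr * err * x1
        w[1] += lr * err * x2
        bias += lr * err

print([1 if w[0] * a + w[1] * b + bias > 0 else 0 for (a, b), _ in data])
# AND is linearly separable, so this settles on [0, 0, 0, 1]
```

The calculation load is trivial here; it explodes when the layers multiply and the repetition count climbs into the millions, which is the connection to today's training runs.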
 
Upvote 0

RocksInMyHead

God is innocent; Noah built on a floodplain!
May 12, 2011
10,194
11,012
PA
✟472,278.00
Country
United States
Faith
Catholic
Marital Status
Single
Politics
US-Democrat
There are obviously other examples... but given what we know about the C-suite, they hate spending money, they certainly don't like to spend money on something that tanks their results, and they often implement these things in an incremental way (nobody is cutting over to these tools like the flip of a switch). So the fact that they're adopting it, sticking with it, and seeking to expand it once they see the outcomes means it's working.
What are you even talking about? C-suite loves to spend money - it's one of their primary purposes. Especially other people's money, and the amount of investment capital floating around the AI space today means there's plenty of other people's money for C-suite execs to spend on it.

What they do hate is missing out on industry trends relative to their competitors, and AI is the biggest industry trend on the planet (for just about every industry) right now. Combine those two things - a massive industry trend and seemingly unlimited capital to spend - and of course they're going to adopt it, stick with it, and continue investing.

I don't doubt that they're also seeing results, but the question that everyone seems to be avoiding is the long-term outlook. Sure, they can eliminate entry-level positions for significant cost-savings, but what does that mean for the workforce in 5 years? 10 years? 20 years? What happens when people realize that none of the AI companies will ever be profitable on the current business model? Prices go up, and suddenly, you're paying more for that service than you were for what it replaced, and you can't go back because you're totally reliant on it. This has happened with pretty much every tech startup (Uber, DoorDash, Netflix/streaming services, etc). They get you in the door with a low cost of entry and cool features, make themselves an indispensable part of your life, push out competitors, then realize that they'll never be profitable and jack up the price, reduce services, introduce ads, and generally destroy what made them an attractive product initially. Only, there aren't any alternatives to fall back on because they pushed them all out, so you're stuck if you want that type of service.
 
Last edited:
Upvote 0

ThatRobGuy

Part of the IT crowd
Site Supporter
Sep 4, 2005
30,119
17,588
Here
✟1,586,657.00
Country
United States
Gender
Male
Faith
Atheist
Marital Status
Single
Politics
US-Others
What are you even talking about? C-suite loves to spend money - it's one of their primary purposes. Especially other people's money, and the amount of investment capital floating around the AI space today means there's plenty of other people's money for C-suite execs to spend on it.
They love spending money on their own salaries and bonuses.

They nickel and dime people on raises, and quibble about having to replace the coffee machine in the breakroom, and constantly try to find ways to "do more with less" to keep the bottom line looking clean.

They wouldn't be tossing huge sums at these AI investments if they weren't working; otherwise they'd just do "conspicuous investment" (sort of like conspicuous consumption), attaching the cheapest tier of AI tools to one of their product offerings just so they could put something about AI on their website to look fancy and high-tech.
 
Upvote 0

RocksInMyHead

God is innocent; Noah built on a floodplain!
May 12, 2011
10,194
11,012
PA
✟472,278.00
Country
United States
Faith
Catholic
Marital Status
Single
Politics
US-Democrat
They nickel and dime people on raises, and quibble about having to replace the coffee machine in the breakroom, and constantly try to find ways to "do more with less" to keep the bottom line looking clean.
That's all middle/upper management stuff, not C-suite.
They wouldn't be tossing huge sums at these AI investments if they weren't working,
Companies dump huge sums into investments that don't work out every day. I'm sure they're seeing results, but results do not mean that everything is working (and will continue to work) as advertised.
or they'd do "conspicuous investment" (sort of like conspicuous consumption), where they'd attach the cheapest tier of AI tools to one of their product offerings just so they could put something about AI on their website to look fancy and high-tech.
There's a LOT of this going around too.
 
  • Agree
Reactions: Hans Blaster
Upvote 0

Tuur

Well-Known Member
Oct 12, 2022
3,440
1,731
Southeast
✟118,083.00
Country
United States
Gender
Male
Faith
Christian
Marital Status
Married
What happens when people realize that none of the AI companies will ever be profitable on the current business model?
I’ll go along with “most” but not “none.” Every new tech, from steam engines to PCs, has seen the same thing. A good tip-off of something to avoid is a bad P/E ratio: in start-up tech, it’s an indication of more hope than substance.

For all the blather about AI, I’ve yet to see an assessment of the actual intelligence. “How does it compare to a cockroach?” is a valid question. Saw somewhere today that current AI tech does poorly when a three-tower Towers of Hanoi puzzle becomes a four-tower puzzle.
 
Upvote 0

Tuur

Well-Known Member
Oct 12, 2022
3,440
1,731
Southeast
✟118,083.00
Country
United States
Gender
Male
Faith
Christian
Marital Status
Married
Something on my mind ever since mentioning Mandelbrot sets:

Steve Ciarcia had a column in Byte called Circuit Cellar before setting out with his own publication by the same name. One article from the Byte days used a Motorola chip to build a Mandelbrot set generator. The significance was that the thing passed a Cray like it was standing still at generating the sets. Could the same thing be done for AI? Are they doing that now? Don’t know. It might not be cost-effective to design a computer to perform a specific task, but it’s an option for all sorts of things.
 
Upvote 0

RocksInMyHead

God is innocent; Noah built on a floodplain!
May 12, 2011
10,194
11,012
PA
✟472,278.00
Country
United States
Faith
Catholic
Marital Status
Single
Politics
US-Democrat
I’ll go along with “most” but not “none.”
Under the current business model, I can't see any AI companies turning a profit. They've got trillions in capex commitments over the next several years on top of general overhead, and the highest-grossing companies are still in single-digit billions in terms of revenue. That means something significant will have to change for profit to be a real prospect. And if you're building your business around the current AI market, there's a real chance that a change could completely upend your business model.
Every new tech, from steam engines to PCs, has seen the same thing. A good tip-off of something to avoid is a bad P/E ratio. A bad P/E ratio in start-up tech is an indication of more hope than substance.
Anthropic and OpenAI (the two main companies focused purely on AI) aren't even on the stock market yet, though there are rumors they will have IPOs this year. So there isn't even a P/E ratio available to look at. Microsoft, Apple, and Google all do too many other things for their stocks to reflect the viability of AI as a business model.
For all the blather about AI, I’ve yet to see an assessment of the actual intelligence. “How does it compare to a cockroach?” is a valid question. Saw somewhere today that current AI tech does poorly when a three-tower Towers of Hanoi puzzle becomes a four-tower puzzle.
LLMs (which are all we have on the market right now) are not the least bit intelligent. They can produce an illusion of intelligence through conversation, but all they're doing is rearranging words in what they calculate to be a likely correct way.
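A toy way to see the "likely next word" idea: count word bigrams in a tiny corpus and always emit the most frequent follower. (A deliberately crude sketch; real LLMs use learned weights over tokens, but the objective has the same flavor of scoring likely continuations.)

```python
from collections import Counter, defaultdict

# Count which word tends to follow which in a tiny corpus, then
# generate text by greedily emitting the most frequent follower.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

word, out = "the", ["the"]
for _ in range(4):
    word = follows[word].most_common(1)[0][0]  # greedy "likely next word"
    out.append(word)
print(" ".join(out))
```

On this toy corpus the greedy walk produces "the cat sat on the": fluent-looking, statistically grounded, and entirely without understanding.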
 
  • Like
Reactions: Hans Blaster
Upvote 0

Hans Blaster

Call Me Al
Mar 11, 2017
24,641
18,015
56
USA
✟465,920.00
Country
United States
Gender
Male
Faith
Atheist
Marital Status
Private
Politics
US-Democrat
This struck me as humorous in a “compared to what?” sort of way. The “what” being diesel and coal. Natural gas and propane are cleaner. Not squeaky clean, just cleaner than other combustibles.
Compared to before the "X" AI facility (I think it is called Colossus) was built in Memphis. X is running generators without permits and in excess of their permits. Smog in Memphis is up (according to the articles I read yesterday) about 50%.

(The generators in question are Natural Gas.)
 
Upvote 0

Larniavc

"Larniavc sir, how are you so smart?"
Jul 14, 2015
16,478
9,998
53
✟427,468.00
Country
United Kingdom
Gender
Male
Faith
Atheist
Marital Status
Married
Politics
UK-Liberal-Democrats
President Donald Trump's administration has been heralding the construction of data centers to power artificial intelligence (AI) infrastructure across the country. But many red state residents are becoming increasingly angry about data centers' intrusion on their rural communities.
That's according to a Tuesday article by the Washington Post's Evan Halper entitled "The data center rebellion is here, and it's reshaping the political landscape," which reported that residents in deep-red states like Indiana, Oklahoma and elsewhere are showing up in droves to public hearings solely to speak out against proposed data center construction. The Post zeroed in on an ongoing conflict over a planned data center in Sand Springs, Oklahoma, where Gov. Kevin Stitt (R) has championed the project.
"We know Trump wants data centers and Kevin Stitt wants data centers, but these things don’t affect these people," Trump supporter Brian Ingram said. "You know, this affects us."
U.S. Secretary of Energy Chris Wright admitted that the data centers are unpopular as they have been tied to higher utility costs in adjacent communities, due to their immense power requirements. And the Post noted that Rep. Marjorie Taylor Greene (R-Ga.) has also railed against data centers due to both their electricity consumption and their draining of precious freshwater sources. MSN

Maybe there is hope they will realize what they voted for.

Keep in mind the Trump admin is fighting tooth and nail to prevent states from being able to regulate these data centers, with the same vigor they are trying to prevent states from regulating gambling taking place in their own states.

Just what America needs, more degenerate gamblers.
I think calling it infrastructure is incredibly misleading. Maybe sort the actual infrastructure first?
 
Upvote 0

Pommer

CoPacEtiC SkEpTic
Sep 13, 2008
24,057
14,692
Earth
✟282,770.00
Country
United States
Gender
Male
Faith
Deist
Marital Status
In Relationship
Politics
US-Democrat
LLMs (which are all we have on the market right now) are not the least bit intelligent. They can produce an illusion of intelligence through conversation, but all they're doing is rearranging words in what they calculate to be a likely correct way.
Hey, this is my schtick!
 
Upvote 0

Hans Blaster

Call Me Al
Mar 11, 2017
24,641
18,015
56
USA
✟465,920.00
Country
United States
Gender
Male
Faith
Atheist
Marital Status
Private
Politics
US-Democrat
For all the blather about AI, I’ve yet to see an assessment of the actual intelligence. “How does it compare to a cockroach?” is a valid question. Saw somewhere today that current AI tech does poorly when a three-tower Towers of Hanoi puzzle becomes a four-tower puzzle.
I saw an article mentioning the Towers of Hanoi puzzle. Just a 3-tower version, but with a tall "stack", and the AI (I think it was an LLM chatbot of the latest kind) would do the start and then just skip to the end. Even when fed the actual algorithm it wouldn't finish properly. Another post (I think it was on social media) talked about struggles getting a chatbot to count *all* of the integers between 1 and 100. (It would count into the low double digits and then just go "and so on to 100" or something like that.) In both cases my suspicion is that the training set didn't contain the skipped portion (who writes out all 100 integers, or every move in the Towers?) and it couldn't generate the missing parts.
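For contrast, the exact procedure is tiny and necessarily emits every move (2^n - 1 of them), which is precisely the part that gets skipped:

```python
# Classic recursive Towers of Hanoi: move n disks from src to dst
# using via as the spare peg, recording every single move.
def hanoi(n, src, dst, via, moves):
    if n == 0:
        return
    hanoi(n - 1, src, via, dst, moves)   # clear the way
    moves.append((src, dst))             # move the largest disk
    hanoi(n - 1, via, dst, src, moves)   # restack on top of it

moves = []
hanoi(5, "A", "C", "B", moves)
print(len(moves))  # 31 moves for a 5-disk stack; 10 disks need 1023
```

A program executing this can't "skip to the end"; a model producing text that merely resembles solutions can, which is the difference being pointed at.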
 
Upvote 0

Pommer

CoPacEtiC SkEpTic
Sep 13, 2008
24,057
14,692
Earth
✟282,770.00
Country
United States
Gender
Male
Faith
Deist
Marital Status
In Relationship
Politics
US-Democrat
I’ll go along with “most” but not “none.” Every new tech, from steam engines to PCs, has seen the same thing. A good tip-off of something to avoid is a bad P/E ratio: in start-up tech, it’s an indication of more hope than substance.

For all the blather about AI, I’ve yet to see an assessment of the actual intelligence. “How does it compare to a cockroach?” is a valid question. Saw somewhere today that current AI tech does poorly when a three-tower Towers of Hanoi puzzle becomes a four-tower puzzle.
Heard a “test” for the LLMs (from Harry Shearer’s Le Show) where the AIs were queried about walking vs. driving to a car wash that was 150 meters away. All initially recommended walking, then driving when prompted “but how do I wash my car?”
 
Upvote 0

Tuur

Well-Known Member
Oct 12, 2022
3,440
1,731
Southeast
✟118,083.00
Country
United States
Gender
Male
Faith
Christian
Marital Status
Married
Compared to before the "X" AI facility (I think it is called Colossus) was built in Memphis. X is running generators without permits and in excess of their permits. Smog in Memphis is up (according to the articles I read yesterday) about 50%.

(The generators in question are Natural Gas.)
Smog will increase in cities this time of year from wood burning heat, including pellets. That's why some cities have regulated wood burning heat for air quality issues since the 1970s and 1980s boom that saw wood as supplemental heat to help reduce heating costs. Given what passes for reporting, I wouldn't put it past them to play the game of "They're running generators without a permit and smog is up" simply because X is involved. That, or not realizing that wood heat contributes to smog in winter. So does heating oil. So does gas heat.

Let's see, though. Smog is characterized by smoke and fog, but there's more to it than that, as vehicle exhaust can contribute to it. A quick check attributes that to the interaction of sunlight with vehicle exhaust. So we're looking at NOx. Natural gas produces less NOx than, say, heating oil.

Note that the breathless reporting never asks if the generators have any sort of catalytic converter on the exhaust. But that would require knowing a little bit about it, a willingness to ask questions, and a willingness not to indulge in propaganda.

When we look into the generators, I come across the cousin of an old friend. No, not Musk; the generators themselves. Musk is using 35 semitrailer-sized generators. I've had to run the diesel version for a short time when we had some to provide power during peak demand, so I can envision the setup a little, not that this has anything to do with the output. The natural gas versions have an output of 2.5 MW each, so that's a combined 87.5 MW. That's far less than the 422 MW figure that's bandied about.

Note that 15 of the generators don't have permits. Some of the breathless reporting implies that all 35 aren't permitted. Interesting, that.

He's using VoltaGrid. Here's a link: VoltaGrid - Home
If we go here, we find some data: VoltaGrid - Our Fleet

NOx as low as 0.155 pounds per MWh. 87.5 MW × 24 h × 0.155 lb/MWh = 325.5 pounds of NOx per day. A check comes up with a semitruck producing 275 to 300 g of NOx per hour, which is about 0.606 pounds per hour per truck. A single truck running within the city limits of Memphis for 24 hours would produce about 14.55 pounds of NOx per day. So the 35 generators are putting out roughly the NOx of 22 semitrucks running around the clock.
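The arithmetic above can be sanity-checked with a short script, using only the figures already quoted (0.155 lb/MWh from the VoltaGrid page, 275 g/h at the low end for a semitruck):

```python
LB_PER_GRAM = 1 / 453.592  # pounds per gram

generators = 35
mw_each = 2.5                # MW per natural-gas unit
nox_lb_per_mwh = 0.155       # VoltaGrid's "as low as" NOx figure
truck_g_per_hr = 275         # low end of the 275-300 g/h range

gen_mw = generators * mw_each                    # combined output: 87.5 MW
gen_lb_day = gen_mw * 24 * nox_lb_per_mwh        # ~325.5 lb NOx/day
truck_lb_day = truck_g_per_hr * LB_PER_GRAM * 24 # ~14.55 lb NOx/day per truck
ratio = gen_lb_day / truck_lb_day                # ~22.4 trucks' worth

print(gen_mw, round(gen_lb_day, 1), round(truck_lb_day, 2), round(ratio, 1))
```

Under those assumptions the 35 generators work out to roughly 22 semitrucks idling around the clock, which is still a small slice of a city's total traffic.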

And this, we are told, is increasing smog in Memphis.

Like I said, not squeaky clean, but cleaner than diesel and coal.
 
Upvote 0

Hans Blaster

Call Me Al
Mar 11, 2017
24,641
18,015
56
USA
✟465,920.00
Country
United States
Gender
Male
Faith
Atheist
Marital Status
Private
Politics
US-Democrat
Smog will increase in cities this time of year from wood burning heat, including pellets. That's why some cities have regulated wood burning heat for air quality issues since the 1970s and 1980s boom that saw wood as supplemental heat to help reduce heating costs. Given what passes for reporting, I wouldn't put it past them to play the game of "They're running generators without a permit and smog is up" simply because X is involved. That, or not realizing that wood heat contributes to smog in winter. So does heating oil. So does gas heat.
I didn't say anything about this winter. This has been going on for a year or more.
Let's see, though. Smog is characterized by smoke and fog, but there's more to it than that, as vehicle exhaust can contribute to it. A quick check attributes that to the interaction of sunlight with vehicle exhaust. So we're looking at NOx. Natural gas produces less NOx than, say, heating oil.

Note that the breathless reporting never asks if the generators have any sort of catalytic converter on the exhaust. But that would require knowing a little bit about it, a willingness to ask questions, and a willingness not to indulge in propaganda.

When we look into the generators, I come across the cousin of an old friend. No, not Musk; the generators themselves. Musk is using 35 semitrailer-sized generators. I've had to run the diesel version for a short time when we had some to provide power during peak demand, so I can envision the setup a little, not that this has anything to do with the output. The natural gas versions have an output of 2.5 MW each, so that's a combined 87.5 MW. That's far less than the 422 MW figure that's bandied about.

Note that 15 of the generators don't have permits. Some of the breathless reporting implies that all 35 aren't permitted. Interesting, that.

He's using VoltaGrid. Here's a link: VoltaGrid - Home
If we go here, we find some data: VoltaGrid - Our Fleet

NOx as low as 0.155 pounds per MWh. 87.5 MW × 24 h × 0.155 lb/MWh = 325.5 pounds of NOx per day. A check comes up with a semitruck producing 275 to 300 g of NOx per hour, which is about 0.606 pounds per hour per truck. A single truck running within the city limits of Memphis for 24 hours would produce about 14.55 pounds of NOx per day. So the 35 generators are putting out roughly the NOx of 22 semitrucks running around the clock.
Who knows how efficient or pollution-controlled they are. They haven't been regulated or monitored.
And this, we are told, is increasing smog in Memphis.

Like I said, not squeaky clean, but cleaner than diesel and coal.
 
Upvote 0