Perspective as a service from a raging generalist

What Happened In 2023 Ain't Gonna Happen Again!

1 Comment

2023 was a big year for US tight oil production, particularly in the Permian Basin. It surprised lots of folks, and the "buzz words" were the same hooey we've heard for 13 years: better technology and fewer rigs are producing more oil.

I don't buy it. Rigs drill holes in the ground, nothing more.

Approximately 24% of all tight oil wells drilled in the Permian in 2023 were longer than 11,000 feet (Enverus, et al.).

In spite of these longer laterals and more densely spaced perforations stuffed with more dirt, liquids productivity has been going down over whatever time frame you wish, and, I am pretty sure, so have EUR's.

Because of new well designs and cash-flow-at-all-costs production management practices, what used to be a 53% first-year decline rate in the Permian is now 74%; a well that comes on at 1,000 BOPD is down to roughly 260 BOPD a year later at 74%, versus about 470 BOPD at the old 53%. The new stuff getting drilled in 2023 was going out the back door (as in declining) almost as fast as it was coming in the front.

Rig counts started down, seriously, in May. Frac spreads yo-yo'd but for the most part stayed pretty high, and total completions in the Permian stayed high thru 3Q23. I don't think more wells were being drilled with fewer rigs; that was a dung heap. It was DUC's that caused the 2023 surprises and got everybody's panties in a bunch.


This is a very cool chart. There is a LOT to learn from this chart.

Notice how fast DUC's were deducted from inventory, YOY, 3Q22 to 3Q23. I estimate 525 DUC's were completed in the Permian Basin during that time frame, and at an average recovery of 170,000 BO at month 12 (Novi, IHS), that would have added 244,520 BOPD to 2023 production growth.

East Daley Analytics forecasts Permian oil production exited the year at 6,185 Mb/d, up 560 Mb/d (10%) from YE22 production of 5,625 Mb/d and a record high.
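A back-of-the-envelope check on that arithmetic (my math, assuming the 170,000 BO month-12 cumulative applies to the average completed DUC):

$$525 \ \text{DUCs} \times 170{,}000 \ \text{BO/well} = 89{,}250{,}000 \ \text{BO}$$

$$\frac{89{,}250{,}000 \ \text{BO}}{365 \ \text{days}} \approx 244{,}520 \ \text{BOPD}$$

$$\frac{244{,}520 \ \text{BOPD}}{560{,}000 \ \text{BOPD of growth}} \approx 44\%$$

That 44% is where the "almost half" below comes from.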

In other words, almost half the astounding Permian growth in 2023... came from DUC's and did not have dog dookey to do with technology or greater rig efficiencies. That's really dumb... like this article from oilprice.com:

In the Novi DUC chart above, notice how steep DUC decline was in core counties I have identified from 3Q22 to 3Q23.

Notice how little decline in DUC's there was, for instance, in Reeves, Reagan, Upton and Ward counties. Those are core counties in both sub-basins. The number of DUC's in those core counties has been consistent for years, even in 2021 when oil was pushing $95 and natgas was $6. In 2023 the average price of oil was over $70. Those DUC's are dead DUC's.

Notice how many DUC's there are in the goat pasture outside the core areas. Notice how many DUC's there are on the Central Platform, which are likely in short-lateral conventional benches. Those DUC's need $120/$6, sustained. If a DUC were a source of immediate cash flow, it would have been completed by now, guaronteeeeed.

What's the point?

Affordable, profitable DUC's that could be completed, and that might have accounted for half of Permian Basin growth in 2024 and out years, are gone. Liquids productivity is falling as pressure depletion sets in and gassy oil wells become oily gas wells. Decline rates are increasing. EUR's are going down. High-graded core areas don't have much more room for 15,000-foot laterals. Economics are marginal, at best, with sub-$2 natural gas prices. If there were thousands of Grade A, Primo, Tier 1 & 2 locations left in the Permian, why are they effectively paying $4 MM per new drilling location in M&A's?

Don't expect many more surprises to the upside.

The heart of the Permian Basin watermelon has been chomped.

Jakel1828
30 days ago
Is the mighty Permian Basin starting to decline?
DFW

Air Canada Has to Honor a Refund Policy Its Chatbot Made Up

1 Comment
The airline tried to argue that it shouldn't be liable for anything its chatbot says.
Jakel1828
61 days ago
If companies are going to argue that they shouldn't be held responsible for a chatbot's lies, then what's the effing point?
DFW

Sora’s Surreal Physics

1 Share
Synthetic video clip can be seen here

All the tech world is abuzz with OpenAI's just-released text-to-video synthesizer, Sora, and rightly so: it is both amazing and terrifying, in many ways the apotheosis of the AI world toward which they and others have been building. Few if any people outside the company have tried it yet (always a warning sign), so we are left only with the cherry-picked videos OpenAI has cared to show. But even from the small number of videos out, I think we can conclude a number of things.

  • The quality of the video produced is spectacular. Many clips are cinematic; all are high-resolution, and most look as if (with an important asterisk I will get to) they could be real (unless perhaps you watch in slow motion). Cameras pan and zoom; nothing initially appears to be synthetic. All eight minutes of known footage are here; certainly worth watching at least a minute or two.

  • The company (despite its name) has been characteristically close-lipped about what they have trained the models on. Many people have speculated that there's probably a lot of stuff in there generated from game engines like Unreal. I would not at all be surprised if there had also been lots of training on YouTube videos and various copyrighted materials. Artists are presumably getting really screwed here. I wrote a few words about this yesterday on X, amplifying fellow AI activist Ed Newton-Rex. He, like me, has worked extensively on AI, and has become increasingly worried about how AI is being used in the world:

  • The uses for the merchants of disinformation and propaganda are legion. Look out 2024 elections.

  • All that’s probably obvious, Here’s something less obvious: OpenAI wants us to believe that this is a “path towards building general purpose simulations of the physical world”. As it turns out, that’s either hype or confused, as I will explain below.

    Others seem to see these new results as tantamount to AGI, and vindication for scaling laws, according to which AGI would emerge simply from having enough compute and big enough data sets:

In my view, these claims — about AGI and world models — are hyperbolic, and unlikely to be true. To see why, we need to take a closer look.

§

When you actually watch the (small number of available) videos carefully, lots of weirdnesses emerge: things that couldn't (or probably couldn't) happen in the real world. Some are mild; others reveal something deeply amiss.

Here’s a mild case. Could a dog really make these leaps? I am not convinced that is either behaviorally or physically plausible (would the dalmation really make it around that wooden shutter?). It might pass muster in a movie; I doubt it could happen in reality.

Physical motion is also not quite right, almost zombie-like, as one friend put it:

Example of motion

Causality is not correct here, if you watch the video: all of the flying is backwards.

And if you look carefully, there is a boot where the wing should meet the body, which makes no biomechanical or biological sense. (This might seem a bit nitpicky, but remember, there are only a handful of videos so far available, and internet sleuths have already found a lot of glitches like these.)

Lots of strange gravity if you watch closely, too, like this mysteriously levitating chair (that also shifts shape in bizarre ways):

Full video for that one can be seen here.

It’s worth watching repeatedly and in slow motion, because so much weird happens there.

What really caught my attention in that video, though, is what happens when the guy in the tan shirt walks behind the guy in the blue shirt and the camera pans around. The tan shirt guy simply disappears! So much for spatiotemporal continuity and object permanence. Per the work of Elizabeth Spelke and Renée Baillargeon, children may be born with object permanence, and certainly have some command of it by the age of 4 or 5 months; Sora is never really getting it, even with mountains and mountains of data.

That gross violation of spatiotemporal continuity/failure of object permanence is not a one-off, either; it's something general. It shows up again in this video of wolf pups, which wink in and out of existence:

Video here.

As was pointed out to me, it's not just animals that can magically appear and disappear. For example, in the construction video (about one minute into the compilation above; I can't find a separate link to it), a tilt-shift vehicle drives directly over some pipes that initially appear to take up virtually no vertical space. A few seconds later, the pipes are clearly stacked several feet high in the air; no way could the tilt-shift drive straight over those.

We will, I am certain, see more systemic glitches as more people have access.

And importantly, I predict that many will be hard to remedy. Why? Because the glitches don't stem from the data; they stem from a flaw in how the system reconstructs reality. One of the most fascinating things about Sora's weird physics glitches is that most of them are NOT things that appear in the data. Rather, these glitches are in some ways akin to LLM "hallucinations": artifacts from (roughly speaking) decompression from lossy compression. They don't derive from the world.
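To make the lossy-compression analogy concrete, here is a minimal sketch (mine, not a claim about Sora's actual mechanism): quantize away the high-frequency detail of a signal and reconstruct it, and the reconstruction can contain oscillations and overshoots that appear nowhere in the source data.

```python
# A toy illustration of the lossy-compression analogy (not Sora's
# internals): reconstruction after lossy compression can contain
# values that never appeared in the original data.
import numpy as np
from scipy.fftpack import dct, idct

rng = np.random.default_rng(0)
# A blocky 1-D "image row": hard edges, values between 0 and 255.
signal = np.repeat(rng.integers(0, 256, 8), 32).astype(float)

coeffs = dct(signal, norm="ortho")
coeffs[16:] = 0.0                     # the lossy step: drop high frequencies
reconstruction = idct(coeffs, norm="ortho")

# Gibbs-style ringing: the reconstruction overshoots near every edge,
# "inventing" values absent from the original.
print("original range:     ", signal.min(), "to", signal.max())
print("reconstructed range:", round(reconstruction.min(), 1),
      "to", round(reconstruction.max(), 1))
```

The artifacts come from the reconstruction process itself, which is the point: no amount of extra source data removes them.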

More data won’t solve that problem. And like other generative AI systems, there is no way to encode (and guarantee) constraints like “be truthful” or “obey the laws of physics”or “don’t just invent (or eliminate) objects”.

Indeed the real lesson here is that Generative AI remains a recalcitrant beast, no matter how much data you throw at it.

§

Space, time, and causality would be central to any serious world model; my book about AI with Ernest Davis was about little else; those were also central to Kant’s arguments for innateness, and have been central for years to Elizabeth Spelke’s work on “core knowledge” in cognitive development.

Sora is not a solution to AI's longstanding woes with space, time, and causality. If a system can't handle the permanence of objects at all, I am not sure we should even call it a world model. After all, the most central element of a model of the world is stable representations of the enduring entities therein, and the capacity to reason over those entities. Sora can only fake that by predicting images, and all the glitches show the limitation of such fakery.

Sora is fantastic, but it is akin to morphing and splicing, rather than a path to the physical reasoning we would need for AGI. It is a model of how images change over time, not a model of what entities do in the world.

As a technology for video artists that’s fine, if they choose to use it; the occasional surrealism may even be an advantage for some purposes (like music videos).

As a solution to artificial general intelligence, though, I see it as a distraction.

And god save us from the deluge of deepfakery that is to come.

Gary Marcus has been wishing for a very long time that AI would confront the basics of space, time, and causality. He continues to dream.


Jakel1828
63 days ago
DFW

Renewables Are Not the Cheapest Form of Power

1 Share
The CEO of TotalEnergies believes that the renewable transition will lead to higher—not lower—energy prices. That's a very different view from the popular belief that renewable energy prices are falling so fast that electric power will become ever-cheaper. "We think that fundamentally this energy transition will mean a higher price of energy. I know that…

Jakel1828
64 days ago
DFW

Pucker Factor

1 Share
Jakel1828
65 days ago
DFW

Statistics versus Understanding: The Essence of What Ails Generative AI

1 Comment

The problem with “Foundation Models” (a common term for Generative AI) is that they have never provided the firm, reliable foundation that their name implies. Ernest Davis and I first tried to point this out in September 2021, when the term was introduced:

In our own brief experiments with GPT-3 (OpenAI has refused us proper scientific access for over a year) we found cases like the following, which reflects a complete failure to understand human biology. (Our “prompt” in italics, GPT-3’s response in bold).

You poured yourself a glass of cranberry juice, but then absentmindedly, you poured about a teaspoon of grape juice into it. It looks OK. You try sniffing it, but you have a bad cold, so you can’t smell anything. You are very thirsty. So you ____

GPT-3 decided that a reasonable continuation would be:

drink it. You are now dead.

The system presumably concludes that a phrase like "you are now dead" is plausible because of complex statistical relationships, encoded in its 175 billion parameters, between words like "thirsty" and "absentmindedly" and phrases like "you are now dead". GPT-3 has no idea what grape juice is, or what cranberry juice is, or what pouring, sniffing, smelling, or drinking are, or what it means to be dead.

Generative AI systems have always tried to use statistics as a proxy for deeper understanding, and it’s never entirely worked. That’s why statistically improbable requests like horses riding astronauts have always been challenging for generative image systems.
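As a toy illustration of that statistics-as-proxy point (my sketch, not a claim about GPT-3's internals), even a bare bigram counter produces a GPT-3-style continuation whenever the morbid phrase happens to be the frequent one in its corpus:

```python
# A toy bigram "language model": pure co-occurrence counting, zero
# understanding. The three-sentence corpus is invented for illustration.
from collections import Counter, defaultdict

corpus = (
    "he drank the poison and you are now dead . "
    "she drank the juice and you are now fine . "
    "he drank the poison and you are now dead . "
).split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def most_likely_next(word):
    # Return whatever most often followed `word` in the corpus.
    return bigrams[word].most_common(1)[0][0]

# "dead" wins purely on frequency (2 occurrences vs. 1), not because
# the model knows anything about juice, poison, or death.
print(most_likely_next("now"))  # -> dead
```

Scale the counting up by many orders of magnitude and smooth it with a neural network, and you get far more fluent output, but the underlying move is the same: frequency standing in for meaning.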

This morning Wyatt Walls came up with an even more elegant example:

Others quickly replicated:

Still others rapidly extended the basic idea into other languages:

My personal favorite:

§

As usual GenAI fans pushed back. One complained, for example, that I was asking for the impossible since the system had not been taught relevant information:

But this is nonsense; even a few seconds with Google Images turns up lots of people drawing with their left hand.

Others of course found tortured prompts that did work, but those only go to show that the problem is not with GenAI’s drawing capacity but with its language understanding.

(Parenthetically I haven’t listed everyone who contributed examples above, and even as I write this more examples are streaming in; for more details and examples and sources of all the generous experimenters who have contributed, visit this thread: https://x.com/garymarcus/status/1757394845331751196?s=61)

Also, in fairness, I don’t want to claim that the AI never manages to put a pen in the left hand, either:

§

Wyatt Walls, who came up with the first handedness example, quickly extended the basic idea to something that didn’t involve drawing at all.

Once again statistical frequency (most guitarists play righthanded) won out over linguistic understanding, to the consternation of Hendrix fans everywhere.

§

Here is a wholly different kind of example of the same point. (Recall that 10:10 is the most commonly photographed time of day for watch advertisements.)

§

All these examples led me to think of a famous bit by Lewis Carroll in Through The Looking Glass:

I tried this:

Yet again, the statistics outweigh understanding. The foundation remains shaky.

Gary Marcus dreams of a day when reliability rather than money will be the central focus in AI research.

Jakel1828
66 days ago
Issues like this make it seem we're all destined to sameness if we rely too much on generative AI.
DFW