#mooreslaw

waynerad@diasp.org

"Intel unveiled a new roadmap that includes a new 14A node, the industry's first to use High-NA EUV, here at its Intel Foundry Services Direct Connect 2024 event."

Intel no longer uses "nanometers" to refer to its "process nodes", so I don't know exactly what "14A" means, but there is a quote somewhere of Intel CEO Pat Gelsinger saying 14A produces "1.4 nanometer technology." Maybe "14A" means 14 angstroms: 10 angstroms make a nanometer, so 14 angstroms would be 1.4 nanometers.

The other term in there is "High-NA EUV". "NA" stands for "numerical aperture". But to understand the significance of that we have to take a few steps back.

The company that makes the photolithography machines used to manufacture these chips is ASML (Advanced Semiconductor Materials Lithography).

Chips are made through a process called photolithography, which involves shining light through a mask carrying the chip design and projecting that pattern, shrunk down, onto the wafer; through a process using a lot of complicated chemistry, the pattern is etched into the surface of the silicon and turned into an electronic circuit. These circuits have gotten so small that the wavelengths of visible light are too big to make the chip. Chipmakers predictably went to ultraviolet light, which has shorter wavelengths. That worked for a time, until they ran into a problem: air is opaque to the wavelengths they wanted to use next.

We think of air as transparent, and at the visible wavelengths our eyes use, it pretty much is. But it is not transparent at all wavelengths. At certain ultraviolet wavelengths, it's as opaque as black smoke.

This is why the semiconductor industry had to make the sudden jump from lasers that emit light at 193 nanometers to light sources that emit at 13.5 nanometers. (13.5 was chosen because people already knew how to make light at that wavelength with a laser-driven tin plasma source.) Jumping the chasm from 193 to 13.5 jumps across the wavelengths where air is opaque. 193 has been called "deep ultraviolet", or DUV. 13.5 is called "extreme ultraviolet", or EUV. So whenever you see "EUV", as in the phrase "High-NA EUV", that's what it's talking about.

Making this jump required rethinking all the optics involved in making chips. Mainly this meant replacing all the lenses with mirrors: at 13.5 nanometers essentially every material absorbs the light, so transparent lenses don't work, and the optics have to be built from reflective mirrors instead.

Besides decreasing the wavelength (and increasing the frequency) of the light, what else can be done?

It turns out there are two primary things that determine the limit of the size you can etch: the light wavelength and the numerical aperture. There are some additional factors that have to do with the chemistry you're using for the photoresists and so forth, but we'll not concern ourselves with those at the moment.

So what is numerical aperture? If you're a photographer, you probably already know, but it has to do with the angle at which a lens can collect light.

"The numerical aperture of an optical system such as an objective lens is defined by:

NA = n sin(theta)

where n is the index of refraction of the medium in which the lens is working (1.00 for air, 1.33 for pure water, and typically 1.52 for immersion oil), and theta is the half-angle of the maximum cone of light that can enter or exit the lens."
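
To get a feel for that formula, here's a quick sketch. The 70-degree half-angle is purely an illustrative assumption, chosen to show why immersing the lens in water or oil pushes NA above 1:

```python
import math

def numerical_aperture(n: float, half_angle_deg: float) -> float:
    """NA = n * sin(theta), where theta is the half-angle of the light cone."""
    return n * math.sin(math.radians(half_angle_deg))

half_angle_deg = 70.0  # illustrative only; real lens designs vary
for medium, n in [("air", 1.00), ("water", 1.33), ("immersion oil", 1.52)]:
    print(f"{medium}: NA = {numerical_aperture(n, half_angle_deg):.2f}")
# air: NA = 0.94
# water: NA = 1.25
# immersion oil: NA = 1.43
```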

As for "the medium in which the lens is working", note that ASML used water immersion with deep ultraviolet (193 nanometer light and higher) to achieve an NA greater than 1. This hasn't been done for extreme ultraviolet (13.5 nanometer light).

The increase in numerical aperture that ASML has recently accomplished, and that Intel is announcing it will use, is from 0.33 to 0.55. (Numerical aperture is a dimensionless number.)
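
Wavelength and numerical aperture combine in the standard Rayleigh resolution criterion, minimum feature size = k1 * wavelength / NA. Here's a rough sketch of what the jump from 0.33 to 0.55 buys; the k1 value of 0.3 and the DUV immersion NA of 1.35 are typical figures I'm assuming for comparison, not numbers from Intel's or ASML's announcements:

```python
def min_feature_nm(wavelength_nm: float, na: float, k1: float = 0.3) -> float:
    """Rayleigh criterion: smallest printable feature is about k1 * wavelength / NA."""
    return k1 * wavelength_nm / na

configs = [
    ("DUV immersion (193 nm, NA 1.35)", 193.0, 1.35),
    ("EUV (13.5 nm, NA 0.33)", 13.5, 0.33),
    ("High-NA EUV (13.5 nm, NA 0.55)", 13.5, 0.55),
]
for name, wavelength_nm, na in configs:
    print(f"{name}: ~{min_feature_nm(wavelength_nm, na):.1f} nm")
# DUV immersion (193 nm, NA 1.35): ~42.9 nm
# EUV (13.5 nm, NA 0.33): ~12.3 nm
# High-NA EUV (13.5 nm, NA 0.55): ~7.4 nm
```

Going from NA 0.33 to 0.55 shrinks the minimum printable feature by the factor 0.33/0.55, that is, features roughly 40% smaller at the same wavelength.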

How did ASML achieve this increase? Their page on "5 things you should know about High NA EUV lithography" (link below) gives a clue. One of the 5 things is, "larger, anamorphic optics for sharper imaging".

The page refers to "EXE" and "NXE", which are ASML's own equipment lines. NXE systems have a numerical aperture of 0.33, but with the EXE systems, ASML has increased it to 0.55.

"Implementing this increase in NA meant using bigger mirrors. But the bigger mirrors increase the angle at which light hit the reticle, which has the pattern to be printed."

You're probably not familiar with the term "reticle". Here the meaning is different from normal optics, where it refers to a measuring scale or crosshair you might see in a microscope eyepiece or rifle scope. In chipmaking, it has to do with the fact that chips are no longer manufactured by exposing the entire pattern for the whole wafer in one shot. Instead, a pattern for only a small portion of the wafer is exposed at a time, then the stage steps the wafer over and the process is repeated. The mask that holds this small portion of the pattern is called the "reticle".
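
To make the step-and-repeat picture concrete, here's a back-of-the-envelope sketch of how many exposure fields fit on a standard 300 mm wafer. The 26 mm x 33 mm field size is a commonly quoted figure for current scanners, and ignoring partial fields at the wafer edge is a simplification:

```python
import math

wafer_diameter_mm = 300.0
field_w_mm, field_h_mm = 26.0, 33.0  # commonly quoted full-field size at the wafer

wafer_area_mm2 = math.pi * (wafer_diameter_mm / 2) ** 2
field_area_mm2 = field_w_mm * field_h_mm

# Ignores partial fields at the wafer edge, so treat this as a rough estimate.
print(f"Roughly {wafer_area_mm2 / field_area_mm2:.0f} exposure fields per wafer")
# Roughly 82 exposure fields per wafer
```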

"At the larger angle the reticle loses its reflectivity, so the pattern can't be transferred to the wafer. This issue could have been addressed by shrinking the pattern by 8x rather than the 4x used in NXE systems, but that would have required chipmakers to switch to larger reticles."

"Instead, the EXE uses an ingenious design: anamorphic optics. Rather than uniformly shrinking the pattern being printed, the system's mirrors demagnify it by 4x in one direction and 8x in the other. That solution reduced the angle at which the light hit the reticle and avoided the reflection issue. Importantly, it also minimized the new technology's impact on the semiconductor ecosystem by allowing chipmakers to continue using traditionally sized reticles."

Intel announces new roadmap at IFS Direct Connect 2024: New 14A node, Clearwater Forest taped-in, five nodes in four years remains on track

#solidstatelife #mooreslaw #semiconductors

waynerad@diasp.org

A petabit of data can be fit on an optical disk by storing information in 3D. They say that's 125,000 gigabytes on a single DVD-sized disk.
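
The arithmetic behind that headline figure, for anyone who wants to check it (the Blu-ray and DVD capacities are the standard single-layer figures, included just for scale):

```python
petabit_bits = 1e15
total_bytes = petabit_bits / 8       # 1.25e14 bytes
gigabytes = total_bytes / 1e9        # 125,000 GB, i.e. 125 TB

print(f"1 petabit = {gigabytes:,.0f} GB")
print(f"= about {gigabytes / 25:,.0f} single-layer Blu-rays (25 GB each)")
print(f"= about {gigabytes / 4.7:,.0f} single-layer DVDs (4.7 GB each)")
# 1 petabit = 125,000 GB
# = about 5,000 single-layer Blu-rays (25 GB each)
# = about 26,596 single-layer DVDs (4.7 GB each)
```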

"Optical disks like DVDs and Blu-rays are cheap and durable but can't hold much data. Until now, optical disks store data in a single layer of information that's read using a laser. Well, you can kiss those puny disks goodbye thanks to a new technique that can read and write up to 100 layers of data in the space of just 54-nanometres, as described in a new paper published in the journal Nature."

How do they do that? The research paper is paywalled but the abstract says:

"We develop an optical recording medium based on a photoresist film doped with aggregation-induced emission dye, which can be optically stimulated by femtosecond laser beams. This film is highly transparent and uniform, and the aggregation-induced emission phenomenon provides the storage mechanism. It can also be inhibited by another deactivating beam, resulting in a recording spot with a super-resolution scale. This technology makes it possible to achieve exabit-level storage by stacking nanoscale disks into arrays, which is essential in big data centres with limited space."

Femtosecond lasers, eh? How much is this reader/writer going to cost? I have a feeling it won't be showing up at Micro Center any time soon. But it might be good for Google, etc., to back up the massive amounts of data they have in their data centers.

Meet the Super DVD: Scientists develop massive 1 petabit optical disk

#solidstatelife #mooreslaw #storage

waynerad@diasp.org

"Subprime Intelligence". Edward Zitron makes the case that: "We are rapidly approaching the top of generative AI's S-curve, where after a period of rapid growth things begin to slow down dramatically".

"Even in OpenAI's own hand-picked Sora outputs you'll find weird little things that shatter the illusion, where a woman's legs awkwardly shuffle then somehow switch sides as she walks (30 seconds) or blobs of people merge into each other."

"Sora's outputs can mimic real-life objects in a genuinely chilling way, but its outputs -- like DALL-E, like ChatGPT -- are marred by the fact that these models do not actually know anything. They do not know how many arms a monkey has, as these models do not 'know' anything. Sora generates responses based on the data that it has been trained upon, which results in content that is reality-adjacent."

"Generative AI's greatest threat is that it is capable of creating a certain kind of bland, generic content very quickly and cheaply."

I don't know. On the one hand, we've seen rapid bursts of progress in other technologies, only to be followed by periods of diminishing returns, sometimes for a long time, before some breakthrough leads to the next rapid burst of advancement. On the other hand, the number of parameters in these models is much smaller than the number of synapses in the brain, which might be an approximate point of comparison, so it seems plausible that continuing to make them bigger will in fact make them smarter and make the kind of complaints you see in this article go away.

What do you all think? Are we experiencing a temporary burst of progress soon to be followed by a period of diminishing returns? Or should we expect ongoing progress indefinitely?

Subprime Intelligence

#solidstatelife #ai #genai #llms #computervision #mooreslaw #exponentialgrowth

waynerad@diasp.org

30 TB hard drives are coming. Moore's Law keeps cranking, thanks to something called "heat-assisted magnetic recording" (HAMR) technology. It works by temporarily heating the disk material just in the spot that's being written, which makes that spot momentarily easier to magnetize, so data can be written to smaller regions at higher densities. It's tricky to do, but apparently Seagate has mastered the technique. The technology was announced last June, but volume production hasn't fully kicked in yet.
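
Some rough arithmetic on what that capacity implies per platter. The ten-platter count is an assumption based on what's commonly reported for drives in this class, not a figure from the linked article:

```python
drive_capacity_tb = 30.0
platters = 10              # assumed; commonly reported for this class of drive
surfaces = platters * 2    # data is written on both sides of each platter

print(f"~{drive_capacity_tb / platters:.1f} TB per platter")
print(f"~{drive_capacity_tb / surfaces * 1000:.0f} GB per surface")
# ~3.0 TB per platter
# ~1500 GB per surface
```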

Seagate unveils Mozaic 3+ HDD platform as HAMR readies for volume ramp

#solidstatelife #hardware #storage #hdd #mooreslaw

olddog@pluspora.com

Wow! Excellent! I think this is hugely important.

IBM creates the world’s first 2 nm chip | Ars Technica

https://arstechnica.com/gadgets/2021/05/ibm-creates-the-worlds-first-2-nm-chip/

moore's law isn't dead yet —
IBM creates the world’s first 2 nm chip
IBM's new 2 nm process offers transistor density similar to TSMC's next-gen 3 nm.

Jim Salter - 5/7/2021, 4:42 AM

Thursday, IBM announced a breakthrough in integrated circuit design—the world's first 2 nanometer process. IBM says its new process can produce CPUs capable of either 45 percent higher performance, or 75 percent lower energy use than modern 7 nm designs.

If you've followed recent processor news, you're likely aware that Intel's current desktop processors are still laboring along at 14 nm, while the company struggles to complete a migration downward to 10 nm—and that its rivals are on much smaller processes, with the smallest production chips being Apple's new M1 processors at 5 nm. What's less clear is exactly what that means in the first place.

Originally, process size referred to the literal two-dimensional size of a transistor on the wafer itself—but modern 3D chip fabrication processes have made a hash of that. Foundries still refer to a process size in nanometers, but it's a "2D equivalent metric" only loosely coupled to reality, and its true meaning varies from one fabricator to the next.

To get a better idea of how IBM's new 2 nm process stacks up, we can take a look at transistor densities—with production process information sourced from Wikichip and information on IBM's process courtesy of AnandTech's Dr. Ian Cutress, who got IBM to translate "the size of a fingernail" (enough area to pack in 50 billion transistors using the new process) into 150 square millimeters.

| Manufacturer | Example | Process Size | Peak Transistor Density (millions/sq mm) |
| --- | --- | --- | --- |
| Intel | Cypress Cove (desktop) CPUs | 14 nm | 45 |
| Intel | Willow Cove (laptop) CPUs | 10 nm | 100 |
| AMD (TSMC) | Zen 3 CPUs | 7 nm | 91 |
| Apple (TSMC) | M1 CPUs | 5 nm | 171 |
| Apple (TSMC) | next-gen Apple CPUs, circa 2022 | 3 nm | ~292 (estimated) |
| IBM | May 6 prototype IC | 2 nm | 333 |
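
A quick sanity check of that last row, using the fingernail figures quoted above (my arithmetic, not part of the original article):

```python
transistors = 50e9   # "50 billion transistors"
area_mm2 = 150.0     # "150 square millimeters"

density_millions_per_mm2 = transistors / area_mm2 / 1e6
print(f"{density_millions_per_mm2:.0f} million transistors per square millimeter")
# 333 million transistors per square millimeter
```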

As you can see in the chart above, the simple "nanometer" metric varies pretty strenuously from one foundry to the next—in particular, Intel's processes sport a much higher transistor density than implied by the "process size" metric, with its 10 nm Willow Cove CPUs being roughly on par with 7 nm parts coming from TSMC's foundries. (TSMC builds processors for AMD, Apple, and other high-profile customers.)

Although IBM claims that the new process could "quadruple cell phone battery life, only requiring users to charge their devices every four days," it's still far too early to ascribe concrete power and performance characteristics to chips designed on the new process. Comparing transistor densities to existing processes also seems to take some of the wind from IBM's sails—comparing the new design to TSMC 7 nm is well and good, but TSMC's 5 nm process is already in production, and its 3 nm process—with a very similar transistor density—is on track for production status next year.

We don't yet have any announcements of real products in development on the new process. However, IBM currently has working partnerships with both Samsung and Intel, who might integrate this process into their own future production.

#Electronics #Computing #IT #CellPhones #2nm #MooresLaw

dredmorbius@joindiaspora.com

By Moore's Law, computers have increased in speed roughly 1 million-fold in 50 years. Where are the gains?

I'd argue it's the users.

The Solow Paradox is an observation in economics, that "you can see the computer age everywhere but in the productivity statistics", named after Robert Solow, who coined it in 1987.

The question comes up in a news item that Chrome's update to its search/navigation "omnibox" reduced root DNS server loads by 40%, posted here by @Phil Stracchino.

Some see that as wasteful. It's an interesting trade between UI/UX simplicity and technical load.

If you think about the past ... dunno, 50 years ... of computer interface design, it's virtually all been a trade-off between those two factors. And the main gain of the million-fold increase in raw compute capacity since 1971 has been, to a rough approximation, a million-fold increase in the number of computer users.
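
For scale, a million-fold gain over 50 years works out to a doubling roughly every two and a half years:

```python
import math

factor = 1e6
years = 50
doublings = math.log2(factor)   # about 19.9 doublings

print(f"{doublings:.1f} doublings, i.e. one roughly every {years / doublings:.1f} years")
# 19.9 doublings, i.e. one roughly every 2.5 years
```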

Reality is a bit more complex: there are numerous threshold effects of cost, capabilities, compute power, memory, storage, and networking capability, as well as "parasitic" but financially-significant roles such as advertising and surveillance, which pay many of the bills these days.

And there's some argument that the Solow Paradox merely measured a lag as new processes and businesses formed to take advantage of compute power, though ... well, I may have more to say on that. You can, though, see similar patterns in earlier power- and transmission-related technologies such as steam and electricity, where initial adoptions simply aped earlier equivalents, and it wasn't until factories and processes were restructured around the specific benefits of the novel technologies that the gains emerged, a process that took several decades.

In the case of computers, a significant aspect may well be the fact that end-user computer skills vary tremendously and are overwhelmingly poor, as Jakob Nielsen's commentary on a large-scale multi-nation OECD study shows. Over half the population, and over 2/3 in most surveyed industrialised countries, have poor, "below poor", or no computer skills at all.

And if you want them to make use of digital technology, it's a heck of a lot easier to move the devices and tools to their level than to raise them to the level of the tools. Including by combining search and navigation inputs in a browser used by billions of souls. It's the tyranny of the minimum viable user.

If you want to know where compute power's gone, look at the users.

#MooresLaw #ComputePower #TyrannyOfTheMinimumViableUser