Friday, November 21, 2025

Attack, defend, pursue—the Space Force’s new naming scheme foretells new era

A little more than a century ago, the US Army Air Service came up with a scheme for naming the military’s multiplying fleet of airplanes.

The 1924 aircraft designation code produced memorable names like the B-17, A-26, B-29, and P-51—B for bomber, A for attack, and P for pursuit—during World War II. The military later changed the prefix for pursuit aircraft to F for fighter, leading to recognizable modern names like the F-15 and F-16.

Now, the newest branch of the military is carving its own path with a new document outlining how the Space Force, which can trace its lineage back to the Army Air Service, will name and designate its “weapon systems” on the ground and in orbit. Ars obtained a copy of the document, first written in 2023 and amended in 2024.

Study: Kids’ drip paintings more like Pollock’s than those of adults

Not everyone appreciates the artistry of Jackson Pollock’s famous drip paintings, with some dismissing them as something any child could create. While Pollock’s work is undeniably more sophisticated than that, it turns out that when one examines splatter paintings made by adults and young children through a fractal lens and compares them to Pollock’s own, the children’s work bears a closer resemblance to Pollock’s than the adults’ does. This might be due to the artist’s physiology, namely a certain clumsiness with regard to balance, according to a new paper published in the journal Frontiers in Physics.

Co-author Richard Taylor, a physicist at the University of Oregon, first found evidence of fractal patterns in Pollock’s seemingly random drip patterns in 2001. As previously reported, his original hypothesis drew considerable controversy, both from art historians and a few fellow physicists. In a 2006 paper published in Nature, Case Western Reserve University physicists Katherine Jones-Smith and Harsh Mathur claimed Taylor’s work was “seriously flawed” and “lacked the range of scales needed to be considered fractal.” (To prove the point, Jones-Smith created her own version of a fractal painting using Taylor’s criteria in about five minutes with Photoshop.)

Taylor was particularly criticized for his attempt to use fractal analysis as the basis for an authentication tool to distinguish genuine Pollocks from reproductions or forgeries. He concedes that much of that criticism was valid at the time. But as vindication, he points to a machine learning-based study in 2015 relying on fractal dimension and other factors that achieved a 93 percent accuracy rate distinguishing between genuine Pollocks and non-Pollocks. Taylor built on that work for a 2024 paper reporting 99 percent accuracy.

“We’re in an LLM bubble,” Hugging Face CEO says—but not an AI one

There’s been a lot of talk of an AI bubble lately, especially regarding circular funding involving companies like OpenAI and Anthropic—but Clem Delangue, CEO of machine-learning resources hub Hugging Face, has made the case that the bubble is specific to large language models, which is just one application of AI.

“I think we’re in an LLM bubble, and I think the LLM bubble might be bursting next year,” he said at an Axios event this week, as quoted in a TechCrunch article. “But ‘LLM’ is just a subset of AI when it comes to applying AI to biology, chemistry, image, audio, [and] video. I think we’re at the beginning of it, and we’ll see much more in the next few years.”

At Ars, we’ve written at length in recent days about the fears around AI investment. But to Delangue’s point, almost all of those discussions center on companies whose chief product is large language models, or the data centers meant to power them—specifically, companies focused on general-purpose chatbots that aim to be everything for everybody.

Rocket Lab Electron among first artifacts installed in CA Science Center space gallery

It took the California Science Center more than three years to erect its new Samuel Oschin Air and Space Center, including stacking NASA’s space shuttle Endeavour for its launch pad-like display.

Now the big work begins.

“That’s completing the artifact installation and then installing the exhibits,” said Jeffrey Rudolph, president and CEO of the California Science Center in Los Angeles, in an interview. “Most of the exhibits are in fabrication in shops around the country and audio-visual production is underway. We’re full-on focused on exhibits now.”

He got sued for sharing public YouTube videos; nightmare ended in settlement

Nobody expects to get sued for re-posting a YouTube video on social media by using the “share” button, but librarian Ian Linkletter spent the past five years embroiled in a copyright fight after doing just that.

Now that a settlement has been reached, Linkletter told Ars why he thinks his 2020 tweets sharing public YouTube videos put a target on his back.

Linkletter’s legal nightmare started in 2020 after an education technology company, Proctorio, began monitoring student backlash on Reddit over its AI tool used to remotely scan rooms, identify students, and prevent cheating on exams. On Reddit, students echoed serious concerns raised by researchers, warning of privacy issues, racist and sexist biases, and barriers to students with disabilities.

Thursday, November 20, 2025

Critics scoff after Microsoft warns AI feature can infect machines and pilfer data

Microsoft’s warning on Tuesday that an experimental AI agent integrated into Windows can infect devices and pilfer sensitive user data has set off a familiar response from security-minded critics: Why is Big Tech so intent on pushing new features before their dangerous behaviors can be fully understood and contained?

As reported Tuesday, Microsoft introduced Copilot Actions, a new set of “experimental agentic features” that, when enabled, perform “everyday tasks like organizing files, scheduling meetings, or sending emails,” and provide “an active digital collaborator that can carry out complex tasks for you to enhance efficiency and productivity.”

Hallucinations and prompt injections apply

The fanfare, however, came with a significant caveat. Microsoft recommended users enable Copilot Actions only “if you understand the security implications outlined.”

How Louvre thieves exploited human psychology to avoid suspicion—and what it reveals about AI

On the sunny morning of October 19, 2025, four men allegedly walked into the world’s most-visited museum and left, minutes later, with crown jewels worth 88 million euros ($101 million). The theft from Paris’ Louvre Museum—one of the world’s most surveilled cultural institutions—took just under eight minutes.

Visitors kept browsing. Security didn’t react (until alarms were triggered). The men disappeared into the city’s traffic before anyone realized what had happened.

Investigators later revealed that the thieves wore hi-vis vests, disguising themselves as construction workers. They arrived with a furniture lift, a common sight in Paris’s narrow streets, and used it to reach a balcony overlooking the Seine. Dressed as workers, they looked as if they belonged.

This strategy worked because we don’t see the world objectively. We see it through categories—through what we expect to see. The thieves understood the social categories that we perceive as “normal” and exploited them to avoid suspicion. Many artificial intelligence (AI) systems work in the same way and are vulnerable to the same kinds of mistakes as a result.

The sociologist Erving Goffman would describe what happened at the Louvre using his concept of the presentation of self: people “perform” social roles by adopting the cues others expect. Here, the performance of normality became the perfect camouflage.

The sociology of sight

Humans carry out mental categorization all the time to make sense of people and places. When something fits the category of “ordinary,” it slips from notice.

AI systems used for tasks such as facial recognition and detecting suspicious activity in a public area operate in a similar way. For humans, categorization is cultural. For AI, it is mathematical.

But both systems rely on learned patterns rather than objective reality. Because AI learns from data about who looks “normal” and who looks “suspicious,” it absorbs the categories embedded in its training data. And this makes it susceptible to bias.

The Louvre robbers weren’t seen as dangerous because they fit a trusted category. In AI, the same process can have the opposite effect: people who don’t fit the statistical norm become more visible and over-scrutinized.

It can mean a facial recognition system disproportionately flags certain racial or gendered groups as potential threats while letting others pass unnoticed.

A sociological lens helps us see that these aren’t separate issues. AI doesn’t invent its categories; it learns ours. When a computer vision system is trained on security footage where “normal” is defined by particular bodies, clothing, or behavior, it reproduces those assumptions.
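To make that mechanism concrete, here is a minimal, hypothetical sketch (not taken from the article or the authors’ work): a toy classifier trained on labels that flag one group more often than another absorbs that skew and scores identical behavior differently depending on group membership. The feature names, group labels, and flag rates below are invented purely for illustration.

# Illustrative sketch only: a classifier absorbs a biased "suspicious" label
# from its training data; all names and rates here are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, size=n)      # 0 = majority group, 1 = minority group (hypothetical)
behavior = rng.normal(size=n)           # behavior is drawn identically for both groups
# Biased training labels: the minority group is flagged as "suspicious" far more often
label = (rng.random(n) < np.where(group == 1, 0.30, 0.05)).astype(int)

model = LogisticRegression().fit(np.column_stack([group, behavior]), label)

# Same behavior, different group membership -> very different "suspicious" scores
test = np.array([[0, 0.0], [1, 0.0]])
print(model.predict_proba(test)[:, 1])  # the minority-group row scores markedly higher

Nothing about the behavior differs between the two test rows; only the learned association with the group feature does, which is exactly the sense in which a system trained on skewed footage or labels “reproduces those assumptions.”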

Just as the museum’s guards looked past the thieves because they appeared to belong, AI can look past certain patterns while overreacting to others.

Categorization, whether human or algorithmic, is a double-edged sword. It helps us process information quickly, but it also encodes our cultural assumptions. Both people and machines rely on pattern recognition, which is an efficient but imperfect strategy.

A sociological view of AI treats algorithms as mirrors: They reflect back our social categories and hierarchies. In the Louvre case, the mirror is turned toward us. The robbers succeeded not because they were invisible, but because they were seen through the lens of normality. In AI terms, they passed the classification test.

From museum halls to machine learning

This link between perception and categorization reveals something important about our increasingly algorithmic world. Whether it’s a guard deciding who looks suspicious or an AI deciding who looks like a “shoplifter,” the underlying process is the same: assigning people to categories based on cues that feel objective but are culturally learned.

When an AI system is described as “biased,” this often means that it reflects those social categories too faithfully. The Louvre heist reminds us that these categories don’t just shape our attitudes; they shape what gets noticed at all.

After the theft, France’s culture minister promised new cameras and tighter security. But no matter how advanced those systems become, they will still rely on categorization. Someone, or something, must decide what counts as “suspicious behavior.” If that decision rests on assumptions, the same blind spots will persist.

The Louvre robbery will be remembered as one of Europe’s most spectacular museum thefts. The thieves succeeded because they mastered the sociology of appearance: They understood the categories of normality and used them as tools.

And in doing so, they showed how both people and machines can mistake conformity for safety. Their success in broad daylight wasn’t only a triumph of planning. It was a triumph of categorical thinking, the same logic that underlies both human perception and artificial intelligence.

The lesson is clear: Before we teach machines to see better, we must first learn to question how we see.

Vincent Charles, Reader in AI for Business and Management Science, Queen’s University Belfast, and Tatiana Gherman, Associate Professor of AI for Business and Strategy, University of Northampton. This article is republished from The Conversation under a Creative Commons license. Read the original article.
