Tech
Indiana Chief Justice Loretta Rush. Indiana Supreme Court

An Indiana man may beat a drug prosecution after the state's highest court threw out a search warrant against him late last week. The search warrant was based on the idea that the man had "stolen" a GPS tracking device belonging to the government. But Indiana's Supreme Court concluded that he'd done no such thing—and the cops should have known it.

Last November, we wrote about the case of Derek Heuring, an Indiana man the Warrick County Sheriff's Office suspected of selling meth. Authorities got a warrant to put a GPS tracker on Heuring's car, getting a stream of data on his location for six days. But then the data stopped.

Officers suspected Heuring had discovered and removed the tracking device. After waiting for a few more days, they got a warrant to search his home and a barn belonging to his father. They argued the disappearance of the tracking device was evidence that Heuring had stolen it.

During their search, police found the tracking device and some methamphetamine. They charged Heuring with drug-related crimes as well as theft of the GPS device.

But at trial, Heuring's lawyers argued that the warrant to search the home and barn had been illegal. An application for a search warrant must provide probable cause to believe a crime was committed. But removing a small, unmarked object from your personal vehicle is no crime at all, Heuring's lawyers argued. Heuring had no way of knowing what the device was or who it belonged to—and certainly no obligation to leave the device on his vehicle.

An Indiana appeals court ruled against Heuring last year. But Indiana's Supreme Court seemed more sympathetic to Heuring's case during oral arguments last November.

"I'm really struggling with how is that theft," said Justice Steven David during November's oral arguments.

“We find it reckless”

Last Thursday, Indiana's highest court made it official, ruling that the search warrant that allowed police to recover Heuring's meth was illegal. The police had no more than a hunch that Heuring had removed the device, the court said, and that wasn't enough to get a search warrant.

Even if the police could have proved that Heuring had removed the device, that wouldn't prove he stole it, the court said.

Tech
  • Diamond City Fenway Park in Boston, as seen in Apple Maps' Look Around feature. Samuel Axon
  • The US Capitol Building in Look Around.
  • The Philadelphia Museum of Art in Look Around. Samuel Axon
  • Boston Public Garden in Look Around. Samuel Axon
  • The White House in Look Around—or about as close as you can get, anyway. Samuel Axon
  • Philly's Reading Terminal Market in Look Around.
  • Harvard Square in Look Around. Samuel Axon
  • DC's Washington Monument in Look Around. Samuel Axon

Apple Maps has been slowly expanding regional coverage for its Google Street View-like Look Around feature, and now MacRumors forum members have spotted rollouts for the feature in the US cities of Philadelphia, Boston, and Washington DC.

Look Around was added as a feature in iOS 13 last September, but it launched with coverage only in or near San Francisco. Like Google Street View, the feature allows users to zoom in to street-level photography of most streets in an urban area. Apple displays Yelp listings and other data on real-world buildings and monuments in the viewport when Look Around is displayed in full screen.

Generally, we have observed that the resolution and quality of the photography are better than what we've usually seen in Google's version.

Tech
World Health Organization (WHO) Director-General Tedros Adhanom Ghebreyesus gives a press conference on the COVID-19 situation at Geneva's WHO headquarters on February 24, 2020. Getty | Fabrice Coffrini

As outbreaks of the new coronavirus flare up in several countries beyond China, experts at the World Health Organization on Monday tried to rein in fears and media speculation that the public health emergency will become a pandemic.

“I have spoken consistently about the need for facts, not fear,” WHO Director-General Tedros Adhanom Ghebreyesus said in a press briefing Monday. “Using the word pandemic now does not fit the facts, but it may certainly cause fear.”

As always, the director-general (who goes by Dr. Tedros) and his colleagues at WHO tried to shift the conversation away from speculation and worst-case scenarios. Instead, they want to focus on data and preparation. In doing so, though, Dr. Tedros noted that some of the latest figures in the epidemic are “deeply concerning.”

Since last week, officials have reported rapid increases in COVID-19 cases in several countries, namely South Korea, Iran, and Italy. As of Monday, February 24, South Korea has confirmed 763 cases and 7 deaths—a dramatic rise from the 30 cases and zero deaths it had tallied just a week ago.

The situation in Italy, likewise, went from 3 cases at the start of last week to 124 confirmed cases and two deaths Monday. Iran went from zero to 43 cases in the same period and has reported eight deaths.

The figures have led to many media reports over the weekend speculating on whether the new coronavirus outbreak is or would become a pandemic. For now, Dr. Tedros said, it is not.

“Our decision about whether to use the word pandemic to describe an epidemic is based on an ongoing assessment of the geographical spread of the virus, the severity of disease it causes and the impact it has on the whole of society,” he explained. “For the moment, we are not witnessing the uncontained global spread of this virus, and we are not witnessing large-scale severe disease or death.”

Assessing risk

Dr. Tedros summarized some of the latest data on cases and disease from China, noting that cases there are in decline and have been declining since February 2.

In Wuhan, where the outbreak began in December, the COVID-19 fatality rate appears to be between 2 percent and 4 percent. US experts have noted that this high fatality rate may partly reflect the fact that health systems in the city have been extremely overwhelmed by the outbreak and facilities have run short of medical supplies.

Outside of Wuhan, the COVID-19 fatality rate in China is approximately 0.7 percent, Dr. Tedros said. But many public health experts have suggested that even that figure may be higher than the actual fatality rate because many mild, nonfatal cases may have gone uncounted. If counted, they would dilute the death toll, leading to a lower fatality rate.

For people who have mild infections—which is over 80 percent of cases, according to Chinese data—recovery takes about two weeks. More severe infections can take three to six weeks until recovery.

Dr. Tedros also reported that the coronavirus itself does not appear to be mutating.

“The key message that should give all countries hope, courage, and confidence is that this virus can be contained,” Dr. Tedros said of the latest assessment from China.

“Does this virus have pandemic potential? Absolutely, it has. Are we there yet? From our assessment, not yet.”

Tomorrow, a team of experts from WHO and China will reveal more details in a technical report.

Tech

Online retailers in Italy have found an easy way to take advantage of widespread coronavirus panic: hawking sold-out products for exorbitant prices.

While pharmacies run out of sanitizing products and masks, the same items appeared online at inflated prices set by individual merchants.

A 250ml bottle of Amuchina — a popular hand sanitizer that normally costs €7.50 — was sold for €50 on eBay Monday. A one-liter bottle of the same product reached a record price of €799.

Overpriced everyday products were marketed as though they had been specifically designed to protect against the virus, even though they are not.

Politicians and consumer groups promptly pointed the finger at online merchants and platforms.

“This is not free market [but] a shameful speculation that has to be stopped immediately,” reads a parliamentary question submitted Monday by Partito Democratico MP Marianna Madia. The center-left group is part of the governing coalition.

Codacons, a major Italian consumer organization, filed a complaint Monday with 104 public prosecutors' offices across the country. It also notified the Italian antitrust watchdog, accusing vendors of fraud and unfair commercial practices.

Even though prices on e-commerce websites such as Amazon and eBay are freely set by merchants according to supply and demand, the organization believes the platforms should share the blame.

Amazon made clear in a statement that only sellers have the power to determine prices, and promised to delete pages that do not comply with its internal rules. Consumer groups, however, see things differently.

Tech
Katherine Johnson sits at her desk with a globe, or "Celestial Training Device." NASA

Katherine Johnson, a trailblazing mathematician best known for her contributions to NASA's human spaceflight program and who gained fame later in life due to the movie Hidden Figures, died Monday. She was 101 years old.

"At NASA, we will never forget her courage and leadership and the milestones we could not have reached without her," said NASA Administrator Jim Bridenstine. "We will continue building on her legacy and work tirelessly to increase opportunities for everyone who has something to contribute toward the ongoing work of raising the bar of human potential."

Born in rural West Virginia on August 26, 1918, Johnson showed an aptitude for mathematics early in life. “I counted everything," she said late in life of her formative years. "I counted the steps to the road, the steps up to church, the number of dishes and silverware I washed… anything that could be counted, I did."

When West Virginia decided to integrate its graduate schools in 1939, Johnson and two male students were selected as the first black students to be offered spots at the state's flagship school, West Virginia University. Katherine left her teaching job and enrolled in the graduate math program.

In the wake of the Soviet Union's launch of the Sputnik spacecraft in 1957, Johnson supported some of the engineers who went on to found the Space Task Group, which morphed into NASA in 1958. At the new space agency, Johnson performed the trajectory analysis for Alan Shepard's flight in May 1961, the first time an American flew into space.

Most notably, in 1962, she performed the critical calculations that put John Glenn into a safe orbit during the first orbital mission of a US astronaut. NASA engineers had run the calculations on electronic computers, but when someone was needed to validate the calculations, Glenn and the rest of the space agency turned to Johnson. As Glenn reportedly put it, “If she says they're good, then I'm ready to go.”

Tech
Most open source projects are vastly more restrictive with their trademarks than their code. OpenBSD's Puffy, Linux's Tux, and the FSF's Meditating Gnu are among the few FOSS logos that can easily and legally be remixed and reused for simple illustrative purposes. OpenBSD, Free Software Foundation, Larry Ewing, Seattle Municipal Archives

Most people have at least heard of open source software by now—and even have a fairly good idea of what it is. Its own luminaries argue incessantly about what to call it—with camps arguing for everything from Free to Libre to Open Source and every possible combination of the above—but the one thing every expert agrees on is that it's not open source (or whatever) if it doesn't have a clearly attributed license.

You can't just publicly dump a bunch of source code without a license and say "whatever—it's there, anybody can get it." Due to the way copyright law works in most of the world, freely available code without an explicitly declared license is copyrighted by the author, all rights reserved. This means it's just plain unsafe to use unlicensed code, published or not—there's nothing stopping the author from coming after you and suing for royalties if you start using it.

The only way to actually make your code open source and freely available is to attach a license to it. Preferably, you want a comment with the name and version of a well-known license in the header of every file and a full copy of the license available in the root folder of your project, named LICENSE or LICENSE.TXT. This, of course, raises the question of which license to use—and why?
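
For illustration, a minimal per-file header might look like the sketch below. The project name, year, and author are placeholders, and the SPDX-License-Identifier line is simply a widely used, machine-readable way of naming the chosen license; adapt it to whichever license you actually pick.

    # Copyright (c) 2020 Jane Example
    # SPDX-License-Identifier: MIT
    #
    # This file is part of ExampleProject. The full license text is in the
    # LICENSE file at the root of the repository.

Pair that header with a verbatim copy of the license text in LICENSE, and there's no ambiguity about what recipients of your code are allowed to do with it.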

There are a few general types of licenses available, and we'll cover each in its own section, along with one or more prominent examples of this license type.

Default licensing—proprietary, all rights reserved

In most jurisdictions, any code or content is automatically copyrighted by the author, with all rights reserved, unless otherwise stated. While it's good form to declare the author and the copyright date in the header of any code or document, failing to do so doesn't mean the author's rights are void.

An author who makes content or code available on their own website, a GitHub repository, etc.—either without a stated license or with an express declaration of copyright—maintains both usage and distribution rights for that code, even though it's trivially simple to view or download. If you execute that code on your own computer or computers, you're transgressing on the author's usage rights, and they may bring civil suit against you for violating their copyright, since they never granted you that right.

Similarly, if you copy that code and give it to a friend, post it on another website, sell it, or otherwise make it available anywhere beyond where the author originally posted it, you've transgressed upon the author's distribution rights, and they have standing to bring a civil suit against you.

Note that an author who maintains proprietary rights to a codebase may individually grant license to persons or organizations to use that code. Technically, you don't ever "buy" software, even when it's boxed up in a physical store. What you're actually purchasing is a license to use the software—which may or may not include physical media containing a copy of the code.

Home-grown licenses

The short version of our comment on home-grown licensing is simple: just don't do it.

There are enough well-understood, OSI-approved open source licenses in the world already that nearly any person or project should be able to find an appropriate one. Writing your own license instead means that potential users of your project, content, or code will have to do the same thing the author didn't want to—read and understand a new license from scratch.

The new license will not have been previously tested in court, which many (though not all) of the OSI-approved open source licenses have been. Even more importantly, your new license will not be widely understood.

When a person or company wants to use a project licensed under—for example—GPL v3, Apache 2.0, or CC0 (more on these licenses later), it's relatively easy to figure out whether the license in question grants enough rights, in the right ways, to be suited for that purpose. Asking a competent intellectual property lawyer for advice is cheap and easy, because that competent IP lawyer should already be familiar with these licenses, case-law involving them, and so forth.

By contrast, if your project is licensed under "Joe's Open Source License v1.01," nobody knows what that means. Legal consultation for a project under that license will be much more expensive—and dicey—because an IP lawyer would need to evaluate the text of the license as an entirely new work, unproven and untested. The new license might have unclear text, unintentional conflicts between clauses, or be otherwise unenforceable due to legal issues its author did not understand.

Failure to choose an OSI-approved license can also disqualify a project from certain rights or grants. For example, both Google and IBM offer royalty-free usage of portions of their patent portfolios to open source projects—and your project, no matter how "free" you consider it, may not qualify with a home-grown license. (IBM specifically names OSI license approval as a grant condition.)

OSI-approved licenses

The Open Source Initiative maintains a list of approved open source licenses, which comply with the OSI's definition of "open source." In the OSI's own words, these licenses "allow software to be freely used, modified, and shared." There is a lot of overlap among these licenses, many of which probably never should have existed—see "home grown licenses," above—but at some point, each of them gained enough traction to go through the OSI license approval process.

We're going to break this list of licenses down into three categories and list some of the more notable examples of each. Most authors don't need to read and understand the OSI's entire list—there usually aren't enough differences between common and uncommon variants of a general license type to make it worth straying from the most commonly used and well-understood versions.

Strong copyleft licenses

A copyleft license is a license that grants the permission to freely use, modify, and redistribute the covered intellectual property—but only if the original license remains intact, both for the original project and for any modifications to the original project anyone might make. This type of license—sometimes dismissively or fearfully referred to as "viral"—is the one attached to such famous projects as the Linux kernel, the GNU C Compiler, and the WordPress content management system.

A copyleft license may be "strong" or "weak"—a strong copyleft license covers both the project itself and any code that links to any code within the covered project. A weak copyleft license only covers the original project itself and allows non-copyleft-licensed code—even proprietary code—to link to functions within the weak-copyleft-licensed project without violating its license.

Some of the more popular strong copyleft licenses include:

  • GPLv2—the GNU General Public License allows for free usage, modification, and distribution of covered code, but the original license must remain intact and covers both the original project and any modifications. No attribution or patent grants are required in the GPLv2, but the seventh section does prohibit redistribution of GPLv2 licensed code if patents or any other reason would render the redistributed code unusable to a recipient. The GPL also requires that anyone distributing compiled versions of a project make original source code available as well, either by providing the source along with the distributed object code, or by offering it upon request.
  • GPLv3—Version three of the GNU General Public License is for most intents and purposes similar to GPLv2. It handles patents differently, however—the GPLv2 forbade redistribution under the GPLv2 if doing so would potentially require royalty payments for patents covering the work. The GPLv3 goes a step further and explicitly grants free usage rights to any patents owned, then or in the future, by any contributor to the project. The GPLv3 also expressly grants recipients the right to break any DRM (Digital Rights Management) code contained within the covered project, preventing them from being charged with violations of the Digital Millennium Copyright Act or similar "tamper-proofing" laws.
  • AGPL—the Affero GNU General Public License is, effectively, the GPLv3 with one significant additional clause—in addition to offering GPL freedoms to those who receive copies of AGPL-licensed software, it offers those same freedoms to users who interact with the AGPL-licensed software over a network. This prevents an individual or company from making significant valuable modifications to a project intended for widespread network use and refusing to make those modifications freely available.

We're going to give a little more ink to the AGPL outside of our bulleted list above, because it's a little harder to explain its impact to someone who isn't already very familiar with copyleft. In order to better understand its impact, we'll look at one real AGPL licensed project and a fictitious scenario involving a large company that might wish to adopt it.

The Nextcloud Web-based file-sharing suite is an AGPL-licensed project. Because it's licensed under a GPL variant, any person or company can freely download, install, and use it, either for themselves or to offer services—including paid services—to others. Let's imagine a hypothetical company—we'll call the company PB LLC, and their product Plopbox—that decides to spin up a large commercial site offering paid access to managed, hosted Nextcloud instances.

In the course of making Plopbox scale to millions of users, PB LLC makes substantial modifications to the code. The modified code consumes far fewer server resources and also adds several features that Plopbox's users find valuable enough to distinguish Plopbox substantially from vanilla installations of Nextcloud. If Nextcloud—the open source project PB LLC consumed in order to create the Plopbox service—had been licensed under the standard GPL, those modifications could remain proprietary, and PB LLC would not be required to provide them to anyone.

This is because the standard GPL's restrictions only kick in on redistribution, and PB LLC did not redistribute its modified version of Nextcloud. Since PB LLC only installed Nextcloud on its own servers, it's not obligated to provide copies of Nextcloud—either the original or the modified versions—to anyone, either automatically or upon request.

However, Nextcloud is not licensed under either standard version of the GPL—it's licensed under the Affero GPL, and the Affero GPL grants all of the rights associated with the GPL to networked users of a covered project, not merely to recipients of distributed code. So PB LLC actually would be required to make their scalability and new-feature modifications available, in source code form (and object code form, if applicable) to anyone who had both used the project (e.g., by opening a Plopbox account) and requested a copy.

Weak copyleft licenses

A weak copyleft license is essentially similar to a strong copyleft license, but it does not extend its "viral" protection across linkage boundaries. Modifications to the weak-copyleft library (or other project) itself must retain the original license, but any code outside that project—even fully proprietary code—may link directly to functions inside the weak copyleft-licensed project.

There are relatively few weak copyleft licenses. The most commonly encountered are:

  • LGPL—the Lesser GNU General Public License. Sometimes still referred to by its original name, GNU "Library" General Public License, since it's most commonly used in shared libraries. Compatible for use with GPL-licensed projects.
  • MPL 2.0—the Mozilla Public License. MPL 2.0 is compatible for use with GPL-licensed projects; prior versions were not.
  • CDDL v1.0—The Common Development and Distribution License, originally authored by Sun Microsystems. CDDL is famously considered incompatible with the GPL, although this incompatibility has not been tested in court.

The major difference between the LGPL and the MPL is attribution—in order to link to an LGPL project from a non-GPL-compliant project, you must "give prominent notice… that the Library is used in it (and) covered by this license." The MPL does not have any attribution requirements; you may redistribute MPL projects, and link to functions within an MPL project, without any need to announce that you're doing so.

The Mozilla Public License is also notable for offering "forward migration." The Mozilla Foundation, as license steward, may create updated versions of the MPL in the future, with unique version numbers. Should it do so, any user of a project licensed MPL 2.0 may choose to use it under the original MPL 2.0 or any later version of the license.

The CDDL similarly allows forward migration but defines the license steward as Sun Microsystems rather than the Mozilla Foundation. Unlike the LGPL and MPL 2.0, CDDL is generally considered incompatible—possibly deliberately—with the GPL. Some organizations have chosen to dynamically link CDDL and GPL licensed code anyway—most notably Canonical, makers of the Ubuntu Linux distribution, who announced their decision to do so by distributing a Linux port of the ZFS filesystem in early 2016. As Canonical put it at the time:

We at Canonical have conducted a legal review, including discussion with the industry's leading software freedom legal counsel, of the licenses that apply to the Linux kernel and to ZFS.

And in doing so, we have concluded that we are acting within the rights granted and in compliance with their terms of both of those licenses. Others have independently achieved the same conclusion. Differing opinions exist, but please bear in mind that these are opinions.

One significant dissent to Canonical's position comes from the Software Freedom Conservancy, which states that linking CDDL and GPL code is necessarily a GPL violation. Although the SFC states this in no uncertain terms, it expresses "sympathy" to Canonical's goals, and its conclusion focuses on asking Oracle (the CDDL's license steward, as the current owners of Sun Microsystems) to resolve the issue.

Should Oracle make the original ZFS codebase available under a GPLv2 compatible license—including any of the compatible permissive licenses—this availability would, in turn, grandfather in the later OpenZFS project without need for laborious consultation of every individual contributor.

We do not recommend modern use of the CDDL license—it is neither generally useful as a permissive license, due to its GPL incompatibility, nor is it likely to be useful as a "GPL poison pill," given the strong stance Canonical and others have taken in the belief that legal challenges to the linkage of CDDL and GPLv2 code would fail in court.

Permissive licenses

Permissive licenses impose very few restrictions on the usage, distribution, or modification of covered projects. As a result, one permissive license tends to be very similar to another.

The most common restriction in permissive licenses is attribution—in other words, these licenses generally require statements giving credit to the original project in any projects derived from them. (We cover permissive licenses that do not require attribution in the next section on public domain equivalent licenses.)

Notable permissive licenses include:

  • BSD four-clause license—the original 1990 Berkeley Software Distribution license allowed for free usage, modification, redistribution, and even relicensing of covered software. Four clauses provided the only limiting factors: any redistribution must include the copyright notice of the original project (clauses one and two), any advertising materials for the project or any derivative project must acknowledge the use of the source project (clause three), and the names of the original project and its contributors may not be used to endorse or promote derived works without permission (clause four).

Tech

Digital Politics is a column about the global intersection of technology and the world of politics.

As Europe lays out its grand vision for a digital future, there is at least one area where the bloc remains unrivaled — creating obstacles for itself.

When the European Commission unveiled its proposals for competing with the United States and China on everything from artificial intelligence to the data economy, President Ursula von der Leyen made it clear the 27-country bloc would do things its own way.

“We want to find European solutions for the digital age,” she told reporters in Brussels.

Those solutions include creating vast pools of data from industry to spur innovation in machine learning and other next-generation technologies — all while upholding the bloc's fundamental rights like data privacy, which set the European Union apart from the world's other powers.

Such balanced rhetoric is appealing. Amid a global techlash, who wouldn't want greater control over digital services?

The problem is that the EU's promise to embed its values into all aspects of technology is likely to hamstring its efforts to compete in the Big Leagues of tech against the U.S. and China.

And there's one reason why: Rivals won't follow its lead.

Where Europe wants to put limits on how facial recognition can be used, China is quickly rolling out a countrywide network of smart cameras equipped with machine-learning algorithms that make George Orwell's “1984” look like a kid's nursery rhyme.

Brussels is adamant that firms operating anywhere from Ireland to Greece must comply with its tough privacy rules. But in the United States — where debate about federal privacy standards has stalled — giants such as Amazon face no equivalent restrictions, leaving them free to harvest Americans' personal information in ways that could lead to new business but that would be illegal in the EU.

Now, Europe wants to write rules for artificial intelligence by baking in these restrictions from the get-go.

President of the European Commission Ursula von der Leyen | Kenzo Tribouillard/AFP via Getty Images

But in so doing, it ensures that local companies will be competing with one hand tied behind their backs.

The bloc is already working from a weak position compared with its global competitors in terms of money invested into technology and expertise to turn capital into global champions. Piling on more restrictions is not likely to close the gap.

Europe's sales pitch

For policymakers in Brussels, European values — including the fundamental right to privacy, a long track record of government intervention in markets and a growing skepticism of international tech companies — are a source of strength, not weakness.

In that, they are correct.

Over the last five years, the bloc has been at the center of almost all global regulatory fights to check the powers of digital giants.

The EU has put its fingerprints on landmark antitrust fines, digital taxes and online privacy upgrades that are the envy of much of the Western world. It also has led the conversation on hot-button issues like how social media platforms should be regulated and what sort of ethical principles should guide the development of AI.

South Korea, Brazil and Japan have all followed Europe's lead in developing similar privacy standards. And even parts of the U.S., like California and Washington state, have jumped on board the EU regulatory train.

But as the bloc looks to cement its claim on the next generation of tech, the “rules first” approach is looking more like an obstacle without a reward.

Take the flagship proposal from Thierry Breton, the EU's internal market commissioner: a European strategy for data.

Over 35 pages of often opaque prose, the French politician and his team lay out a vision based on widespread sharing of EU data, the creation of locally owned cloud computing infrastructure to compete with Big Tech and new rules to protect citizens' rights.

They also describe giving companies the right to access this digital information to spur new advances in machine learning, medicine and other high-tech industries.

“The Commission's vision stems from European values and fundamental rights and the conviction that the human being is, and should remain, at the center,” the proposal proclaims.

Breton's grand plan

But it's easy to spot holes in the plan.

EU Commissioner for Internal Market Thierry Breton | Kenzo Tribouillard/AFP via Getty Images

While much of the data shared will be so-called non-personal data — or anonymized digital information that can be used legally for industrialized number crunching — the EU also wants to tap more sensitive personal information such as, say, patient medical records or, eventually, autonomous driving statistics.

These areas fall under Europe's strict data-protection rules.

Tech

For sale: Exploding power banks, toxic toys and faulty smoke alarms.

These products — all banned in the European Union — can be purchased with a click of a button from large online marketplaces such as Amazon, AliExpress, eBay and Wish, according to a study carried out by consumer organizations in Belgium, Italy, the Netherlands, Denmark, Germany and the U.K.

Of the 250 products bought from popular online shops, including electronics, toys and cosmetics, two-thirds failed the EU's product-safety rules.

Some products, such as toys with loose parts or children's hoodies with long cords that pose a choking hazard, were visibly unsafe. Others contained up to 200 times more dangerous chemicals than allowed, while smoke and carbon monoxide alarms failed to detect deadly amounts of gas or smoke.

Three out of four tested USB chargers, travel adaptors and power banks failed electrical safety tests. Most of them were cheap, unbranded products.

Under current laws, platforms are required to remove listings of unsafe items "expeditiously" once they are flagged. They are under no obligation to search proactively for faulty or dangerous products. Consumer groups have for years called for platforms to take more responsibility for the products sold on their marketplaces through third-party sellers.

British consumer group Which? first found unsafe child car seats sold on Amazon in 2014. Amazon promptly took down the product listings. Then the car seats cropped up again in spot checks in 2017 and 2019. In February, a BBC investigation found similar car seats on the platform yet again.

Regulators are finally starting to listen.

The current European consumer agenda, which dates from 2012, is woefully outdated when it comes to online shopping. European Commissioner for Justice and Consumers Didier Reynders announced he would update the EU's consumer agenda later this year.

In 2018, U.S. companies Amazon and eBay, China's Alibaba and Japan's Rakuten signed a voluntary pledge with the European Commission to guarantee the safety of non-food consumer products offered by third-party sellers. Polish online marketplace Allegro and French platform Cdiscount joined the group in 2020.

The platforms promise to remove the listings of flagged dangerous items within two days. They also vow to act against repeat offenders offering dangerous products and take measures to prevent the reappearance of dangerous listings.

Amazon, for example, has said it uses automated algorithms to review product pages and customer reviews.

Consumer groups say these voluntary measures are nowhere near enough.

Tech
Wok tossing has long been suspected of causing the high rate of shoulder injuries among Chinese chefs. Hungtang Ko and David Hu/Georgia Tech

Fried rice is a classic dish in pretty much every Chinese restaurant, and the strenuous process of tossing the rice in a wok over high heat is key to producing the perfect final product. There's always chemistry involved in cooking, but there's also a fair amount of physics. Scientists at the Georgia Institute of Technology have devised a model for the kinematics of wok-tossing to explain how it produces fried rice that is nicely browned but not burnt. They described their work in a recent paper published in the Journal of the Royal Society Interface.

This work hails from David Hu's lab at Georgia Tech, known for investigating such diverse phenomena as the collective behavior of fire ants, water striders, snakes, various climbing insects, mosquitos, the unique properties of cat tongues, and animal bodily functions like urination and defecation—including a 2019 Ig Nobel Prize winning study on why wombats produce cubed poo. Hu and his graduate student, Hungtang Ko—also a co-author on a 2019 paper on the physics of how fire ants band together to build rafts—discovered they shared a common interest in the physics of cooking, particularly Chinese stir-fry.

Hu and Ko chose to focus their investigation on fried rice (or "scattered golden rice"), a classic dish dating back some 1500 years. According to the authors, tossing the ingredients in the wok while stir-frying ensures that the dish is browned, but not burned. Something about this cooking process creates the so-called "Maillard reaction": the chemical interaction of amino acids and carbohydrates subjected to high heat that is responsible for the browning of meats, for instance.

But woks are heavy, and the constant tossing can take its toll on Chinese chefs, some 64 percent of whom report chronic shoulder pain, among other ailments. Hu and Ko thought that a better understanding of the underlying kinematics of the process might one day lead to fewer wok-related injuries for chefs.

In the summers of 2018 and 2019, Ko and Hu filmed five chefs from stir-fry restaurants in Taiwan and China, cooking fried rice, and then extracted frequency data from that footage. (They had to explain to patrons that the recording was for science, and that they were not making a television show.) It typically takes about two minutes to prepare the dish, including sporadic wok-tossing—some 276 tossing cycles in all, each lasting about one-third of a second.

  • Image sequence showing the wok-tossing process, modeled as a two-link pendulum. Hungtang Ko and David Hu/Georgia Tech
  • Schematic diagram showing mathematical model for wok-tossing. Hungtang Ko and David Hu/Georgia Tech
  • Graph showing the metrics for evaluating wok tosses. Hungtang Ko and David Hu/Georgia Tech

Ko and Hu presented preliminary results of their experiments at a 2018 meeting of the American Physical Society's Division of Fluid Dynamics, publishing the complete analysis in this latest paper. They were able to model the wok's motion with just two variables, akin to a two-link pendulum, since chefs typically don't lift the wok off the stove, maintaining "a single sliding point of contact," they wrote. Their model predicted the trajectory of the rice based on projectile motion, using three metrics: the proportion of the rice being tossed, how high it was tossed, and its angular displacement.
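
For a rough feel for the projectile-motion half of the model—this is not the authors' code, and the launch numbers below are made up for illustration—you can treat a grain of rice leaving the wok as a simple ballistic particle and ask how high and how long it flies:

    import math

    G = 9.81  # gravitational acceleration, m/s^2

    def toss(launch_speed_m_s, launch_angle_deg):
        """Peak height (m) and airtime (s) for a grain of rice treated as a
        simple projectile; air resistance is ignored."""
        v_vertical = launch_speed_m_s * math.sin(math.radians(launch_angle_deg))
        peak_height = v_vertical ** 2 / (2 * G)
        airtime = 2 * v_vertical / G
        return peak_height, airtime

    # Hypothetical launch: 1 m/s at 70 degrees above horizontal.
    height, airtime = toss(1.0, 70)
    print(f"peak height ~{height * 100:.0f} cm, airtime ~{airtime:.2f} s")

With a full tossing cycle lasting only about a third of a second, any airborne rice has to land back in the wok within roughly 0.3 seconds—which caps how high a toss can send the grains at around ten centimeters or so.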

The authors found two distinct stages of wok-tossing: pushing the wok forward and rotating it clockwise to catch rice as it falls, and pulling the wok back while rotating it the other way to toss the rice into the air.

Tech
The top floor of our test house is relatively straightforward—although like many houses, it suffers from terrible router placement nowhere near its center. Jim Salter

Here at Ars, we've spent a lot of time covering how Wi-Fi works, which kits perform the best, and how upcoming standards will affect you. Today, we're going to go a little more basic: we're going to teach you how to figure out how many Wi-Fi access points (APs) you need, and where to put them.

These rules apply whether we're talking about a single Wi-Fi router, a mesh kit like Eero, Plume, or Orbi, or a set of wire-backhauled access points like Ubiquiti's UAP-AC line or TP-Link's EAPs. Unfortunately, these "rules" are necessarily closer to "guidelines" as there are a lot of variables it's impossible to fully account for from an armchair a few thousand miles away. But if you become familiar with these rules, you should at least walk away with a better practical understanding of what to expect—and not expect—from your Wi-Fi gear and how to get the most out of it.

Before we get started

Let's go over one bit of RF (radio-frequency) theory before we get started on our ten rules—some of them will make much better sense if you understand how RF signal strength is measured and how it attenuates over distance and through obstacles.

Note: some RF engineers recommend -65dBm as the lowest signal level for maximum performance. Jim Salter

The above graph gives us some simple free-space loss curves for Wi-Fi frequencies. The most important thing to understand here is what the units actually mean: dBm converts directly to milliwatts, but on a logarithmic base-ten scale. For each 10dBm drop, the actual signal strength in milliwatts drops by a factor of ten. -10dBm is 0.1mW, -20dBm is 0.01mW, and so forth.

The logarithmic scale makes it possible to measure signal loss additively rather than multiplicatively. Each doubling of distance drops the signal by 6dB, as we can clearly see when we look at the bold red 2.4GHz curve: at 1m distance, the signal is -40dBm; at 2m, it's -46dBm; and at 4m, it's down to -52dBm.

Walls and other obstructions—including but not limited to human bodies, cabinets and furniture, and appliances—will attenuate the signal further. A good rule of thumb is an additional 3dB of loss for each wall or other significant obstruction, which we'll talk more about later. You can see additional curves plotted above in finer lines for the same distances including one or two additional walls (or other obstacles).
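
If you'd rather let a computer do the arithmetic, the whole rule-of-thumb model fits in a few lines of Python. The reference levels (-40dBm at one meter for 2.4GHz, roughly -47dBm for 5GHz) and the 3dB-per-wall penalty are approximate values read off the chart above, not measurements from any particular device:

    import math

    # Rough received signal strength at 1 m, in dBm, per the chart above.
    REFERENCE_AT_1M = {"2.4GHz": -40.0, "5GHz": -47.0}
    WALL_LOSS_DB = 3.0  # per interior wall or comparable obstruction

    def estimated_rssi(band, distance_m, walls=0):
        """Free-space loss (20*log10 of distance, i.e. 6dB per doubling)
        plus a flat per-wall penalty."""
        free_space_loss = 20 * math.log10(distance_m)
        return REFERENCE_AT_1M[band] - free_space_loss - WALL_LOSS_DB * walls

    # The doubling example from the text: 1 m, 2 m, and 4 m on 2.4GHz.
    for d in (1, 2, 4):
        print(f"2.4GHz at {d} m: {estimated_rssi('2.4GHz', d):.0f} dBm")
    # -> -40, -46, -52 dBm

    # Two rooms and two walls (see Rule 1): roughly 9 m plus two obstructions.
    print(estimated_rssi("2.4GHz", 9, walls=2))  # about -65 dBm
    print(estimated_rssi("5GHz", 9, walls=2))    # about -72 dBm

The exact reference numbers matter less than the shape of the curve: every doubling of distance costs 6dB, and every wall costs a few more.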

While you should ideally have signal levels no lower than -67dBm, you shouldn't fret about trying to get them much higher than that—typically, there's no real performance difference between a blazing-hot -40dBm and a considerably-cooler -65dBm, as far away from one another on a chart as they may seem. There's a lot more going on with Wi-Fi than just raw signal strength; as long as you exceed that minimum, it doesn't really matter how much you exceed it by.

In fact, too hot of a signal can be as much of a problem as too cold—many a forum user has complained for pages about low speed test results, until finally some wise head asks "did you put your device right next to the access point? Move it a meter or two away, and try again." Sure enough, the "problem" resolves itself.

Rule 1: No more than two rooms and two walls

Our first rule for access point placement is no more than two rooms and two interior walls between access points and devices, if possible. This is a pretty fudge-y rule, because different rooms are shaped and sized differently, and different houses have different wall structures—but it's a good starting point, and it will serve you well in typically-sized houses and apartments with standard, reasonably modern sheet rock interior wall construction.

"Typically-sized," at least in most of the USA, means bedrooms about three or four meters per side and larger living areas up to five or six meters per side. If we take nine meters as the average linear distance covering "two rooms" in a straight line, and add in two interior walls at -3dBM apiece, our RF loss curve shows us that 2.4GHz signals are doing fantastic at -65dBM. 5GHz, not so much—if we need a full nine meters and two full walls, we're down to -72dBM at 5GHz. This is certainly enough to get a connection, but it's not great. In real life, a device at -72dBM on 5GHz will likely see around the same raw throughput as one at -65dBM on 2.4GHz—but the technically slower 2.4GHz connection will tend to be more reliable and exhibit consistently lower latency.

Of course, this all assumes that distance and attenuation are the only problems we face. Rural users—and suburban users with large yards—will likely have already noticed this difference and internalized the rule-of-thumb "2.4GHz is great, but man, 5GHz sucks." Urban users—or suburban folks in housing developments with postage-stamp yards—tend to have a different experience entirely, which we'll cover in Rule 2.

When Ars approaches mesh networking, we come prepared. (L to R: Google WiFi, Plume pods, and AmpliFi pods) Jim Salter

Rule 2: Too much transmit power is a bug

The great thing about 2.4GHz Wi-Fi is the long range and effective penetration. The bad thing about 2.4GHz Wi-Fi is… the long range and effective penetration.

If two Wi-Fi devices within "earshot" of one another transmit on the same frequency at the same time, they accomplish nothing: the devices they were transmitting to have no way of unscrambling the signal and figuring out which bits were meant for them. Contrary to popular belief, this has nothing to do with whether a device is on your network or not—Wi-Fi network name and even password have no bearing here.

In order to (mostly) avoid this problem, any Wi-Fi device has to listen before transmitting—and if any other device is currently transmitting on the same frequency range, yours has to shut up and wait for it to finish. This still doesn't entirely alleviate the problem; if two devices both decide to transmit simultaneously, they'll "collide"—and each has to pick a random amount of time to back off and wait before trying to transmit again. The device that picks the lower random number gets to go first—unless they both picked the same random number, or some other device notices the clean air and decides to transmit before either of them.
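
If the listen-then-back-off dance is hard to picture, a toy simulation makes the effect of crowding obvious. This is not the real 802.11 contention algorithm—actual Wi-Fi uses slotted timing and exponential backoff windows—just the "everyone picks a random number, lowest unique pick wins, ties collide and try again" idea described above:

    import random

    def rounds_until_clear_winner(num_devices, max_backoff=15):
        """Count backoff rounds until exactly one device holds the lowest pick."""
        rounds = 0
        while True:
            rounds += 1
            picks = [random.randint(0, max_backoff) for _ in range(num_devices)]
            if picks.count(min(picks)) == 1:
                return rounds  # one clear winner finally gets the airtime

    random.seed(42)
    for n in (2, 5, 10, 20):
        average = sum(rounds_until_clear_winner(n) for _ in range(2000)) / 2000
        print(f"{n:2d} contending devices: ~{average:.2f} rounds per clean transmission")

The trend is the point: the more radios contending for the same channel—yours or your neighbors'—the more time gets burned resolving collisions instead of moving data.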

This is called "congestion," and for most modern Wi-Fi users, it's at least as big a problem as attenuation. The more devices you have, the more congested your network is. And if they're using the same Wi-Fi channel, the more devices your neighbors have, the more congested both of your networks are—your devices and theirs all contend for the same airtime, and all of them have to respect the same listen-before-transmit rules.

If your own router or access points support it, turning your transmission strength down can actually improve performance and roaming significantly—especially if you've got a mesh kit or other multiple-AP setup. 5GHz typically doesn't need to be detuned this way, since that spectrum already attenuates pretty rapidly—but it can work wonders for 2.4GHz.

A final note for those tempted to try "long-range" access points: a long-range AP can certainly pump its own signal hotter than a typical AP, and blast that signal a greater distance. But what it can't do is make your phone or laptop boost its signal to match. With this kind of imbalanced connection scenario, individual pieces of a website might load rapidly—but the whole experience feels "glitchy," because your phone or laptop struggles to upload the tens or hundreds of individual HTTP/S requests necessary to load each single webpage in the first place.

Rule 3: Use spectrum wisely

In Rule 2, we covered the fact that any device on the same channel competes with your devices for airtime, whether on your network or not. Most people won't have good enough relationships with their neighbors to convince them to turn their transmission strength down—if their router even supports that feature—but you can, hopefully, figure out what channels neighboring networks use and avoid them.

This is usually not going to be an issue with 5GHz, but for 2.4GHz it can be a pretty big deal. For that reason, we recommend that most people avoid 2.4GHz as much as possible. Where you can't avoid it, though, use an app like inSSIDer to take a look at your RF environment every now and then, and try to avoid re-using the busiest spectrum as seen in your house.

This is, unfortunately, trickier than it looks—it doesn't necessarily matter how many SSIDs you can see on a given channel; what matters is how much actual airtime is in use, and you can't get that from either SSID count or raw signal strength in the visible SSIDs. InSSIDer lets you go a step further, and look at the actual airtime utilization on each channel.

This inSSIDer chart shows you how busy each visible Wi-Fi channel is. The entire 2.4GHz spectrum is pretty much eaten alive, here. MetaGeek

In the above inSSIDer chart, the whole 2.4GHz spectrum is pretty much useless. Don't get excited by those "empty" channels 2-5 and 7-10, by the way: 2.4GHz Wi-Fi gear defaults to 20MHz bandwidth, which means a network actually uses five channels (20MHz plus a half-channel margin on each side), not one. Networks on "Channel 1" actually extend from a hypothetical "Channel negative two" to Channel 3. Networks on Channel 6 really extend from Channel 4 through Channel 8, and networks set to Channel 11 actually occupy Channel 9 through Channel 13.
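
You can check that overlap arithmetic yourself: 2.4GHz channel centers sit 5MHz apart (channel 1 is centered at 2412MHz), so a 20MHz-wide transmission inevitably spills across its neighbors. A quick sketch, ignoring channel 14 and regional oddities:

    def overlapped_channels(channel, width_mhz=20):
        """Return the nominal 2.4GHz channels whose centers fall inside a
        transmission of the given width centered on the given channel."""
        center = 2407 + 5 * channel          # channel 1 -> 2412 MHz
        low, high = center - width_mhz / 2, center + width_mhz / 2
        return [ch for ch in range(1, 14) if low <= 2407 + 5 * ch <= high]

    for ch in (1, 6, 11):
        print(f"Channel {ch:2d} overlaps channels {overlapped_channels(ch)}")
    # Channel  1 -> [1, 2, 3] (plus spectrum below channel 1)
    # Channel  6 -> [4, 5, 6, 7, 8]
    # Channel 11 -> [9, 10, 11, 12, 13]

That's why 1, 6, and 11 are the only 20MHz channels in the US 2.4GHz band that don't step on one another.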

Counting the "shoulder," a 20MHz-wide 2.4GHz spectrum "channel" actually occupies a little more than four actual 5MHz channels. Michael Gauthier

Congestion is a much smaller issue with 5GHz networks, because the much lower range and penetration means fewer devices to congest with. You'll frequently hear claims that there are also more 5GHz channels to work with, but in practice that bit isn't really true unless you're engineering Wi-Fi for an enterprise campus with no competing networks. Residential 5GHz Wi-Fi routers and access points are generally configured for either 40MHz or 80MHz bandwidth, which means there are effectively only two non-overlapping channels: the low band, consisting of 5MHz channels 36-64, and the high band, consisting of 5MHz channels 149-165.

Each 40MHz-wide 5GHz network actually occupies a bit more than eight real 5MHz channels. In this chart, each small "bump" represents four 5MHz channels. Per Mejdal Rasmussen

We fully expect to see a bunch of contention over this in the comments: technically, you can fit four 40MHz-wide networks or two 80MHz-wide networks on the lower 5GHz band. Practically, consumer gear tends to be extremely sloppy about using overlapping channels (e.g., an 80MHz channel centered on 48 or 52), making it difficult or impossible to actually pull off that degree of efficient spectrum use in realistic residential settings.

There are also DFS (Dynamic Frequency Selection) channels in between the two standard US consumer bands, but those must be shared with devices such as commercial and military radar systems. Many consumer devices refuse to even attempt to use DFS channels. Even if you have a router or access point willing to use DFS spectrum, it must adhere to stringent requirements to avoid interfering with any detected radar systems. Users "in the middle of nowhere" may be able to use DFS frequencies to great effect—but those users are less likely to have congestion problems in the first place.

If you live near an airport, military base, or coastal docking facility, DFS spectrum is likely not going to be a good fit for you—and if you live outside the US, your exact spectrum availability (both DFS and non-DFS) will be somewhat different than what's pictured here, depending on your local government's regulations.

Rule 4: Central placement is best

The difference between "router at the end of the house" and "access point in the middle of the house" can be night-and-day. Jim Salter

Moving back to the "attenuation" side of things, the ideal place to put any Wi-Fi access point is in the center of the space it needs to cover. If you've got a living space that's 30 meters end-to-end, a router in the middle only needs to cover 15m on each side, whereas one on the far end (where ISP installers like to drop the coax or DSL line) would need to cover the full 30m.

This also applies in smaller spaces with more access points. Remember, Wi-Fi signals attenuate fast. Six meters—the full distance across a single, reasonably large living room—can be enough to attenuate a 5GHz signal below the optimal level, if you include a couple of obstacles such as furniture or human bodies along the way. Which leads us into our next rule…

Rule 5: Above head height, please

Ceiling mount is technically the best option—but if that's too much to ask, just sitting an AP on top of a tall bookshelf can work wonders. Jim Salter

The higher you can mount your access points, the better. A single human body provides roughly as much signal attenuation as an interior wall—which is part of the reason you might notice Wi-Fi at your house getting frustratingly slower or flakier than usual when many friends are over for a party.

Mounting access points—or a single router—above head height means you can avoid the need to transmit through all those pesky, signal-attenuating meat sacks. It also avoids most large furniture and appliances such as couches, tables, stoves, and bookcases.

The absolute ideal mounting is in the dead center of the room, on the ceiling. But if you can't manage that, don't worry—on top of a tall bookshelf is nearly as good, particularly if you expect the access point in question to service both the room it's in, and the room on the other side of the wall its bookshelf or cabinet is placed against.

Rule 6: Cut distances in halves

Let's say you've got some devices that are too far away from the nearest access point to get a good connection. You're lucky enough to have purchased an expandable system—or you're setting up a new multiple-access point mesh kit, and still have one left—so where do you put it?

We've seen people dither over this, and wonder if they should put an extra access point closer to the first access point (which it has to get data from) or closer to the farthest devices (which it has to get data to). The answer, generally, is neither: you put the new AP dead in the middle between its nearest upstream AP, and the farthest clients you expect it to service.

The key here is that you're trying to conserve airtime, by having the best possible connection both between your far-away devices and the new AP, and between the new AP and the closest one to it upstream. Typically, you don't want to favor either side. However, don't forget Rule 1: two rooms, two walls. If you can't split the difference evenly between the farthest clients and the upstream AP without violating Rule 1, then just place it as far away as Rule 1 allows.
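
To see why splitting the distance usually wins, plug some numbers into the same rule-of-thumb loss model from the RF theory section. Suppose the farthest clients sit about 16 meters and two walls from the upstream AP; the figures below are illustrative rather than measured, and the walls are split proportionally just to keep the example simple:

    import math

    def rssi_2g(distance_m, walls=0.0):
        # Rule-of-thumb 2.4GHz model: -40dBm at 1 m, 6dB per doubling of
        # distance, 3dB per interior wall.
        return -40.0 - 20 * math.log10(distance_m) - 3.0 * walls

    def weakest_hop(ap_distance_m, total_m=16.0, total_walls=2.0):
        """Signal of the worse hop when the extra AP sits ap_distance_m
        from the upstream AP, with walls shared proportionally."""
        walls_near = total_walls * ap_distance_m / total_m
        backhaul = rssi_2g(ap_distance_m, walls_near)
        to_client = rssi_2g(total_m - ap_distance_m, total_walls - walls_near)
        return min(backhaul, to_client)

    for d in (4, 8, 12):
        print(f"extra AP {d:2d} m from upstream: weakest hop ~{weakest_hop(d):.0f} dBm")
    # ~-66 dBm with the AP at 4 m (the long hop to the client suffers)
    # ~-61 dBm with the AP at 8 m (both hops stay comfortable)
    # ~-66 dBm with the AP at 12 m (now the backhaul hop suffers)

The midpoint maximizes the weakest link, and the weakest link is what ultimately limits throughput through the extra hop.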

If this all seems too logical and straightforward, don't worry, there's another irritating "unless-if" to consider: some higher-end mesh kits, such as Netgear's Orbi RBK-50/RBK-53 or Plume's Superpods, have an extremely high-bandwidth 4×4 backhaul connection. Because this connection is much faster than the 2×2 or 3×3 connections client devices can utilize, it might be worth settling for lower signal quality between these units, with a degraded throughput that's still close to the best your client devices can manage.

If your mesh kit offers these very fast backhaul connections, and you absolutely cannot introduce any more APs to the mix, you might actually end up better off putting your last AP closer to the clients than to its upstream. But you'll need to experiment, and pay attention to your results.

Wi-Fi is fun, isn't it?

Rule 7: Route around obstacles

A tightly packed bookshelf is a significant RF obstacle—worth a couple of walls in its own right—even when traversed perpendicularly. Penetrating its length is an absolute no-go. Jim Salter

If you've got a really pesky space to work with, there may be areas that you just plain can't penetrate directly. For example, our test house has a concrete slab and several feet of packed earth obstructing the line-of-sight between the router closet and the downstairs floor. We've seen small businesses similarly frustrated at the inability to get Wi-Fi in the front of the office when the back was fine—which turned out to be due to a bookshelf full of engineering tomes lining a hallway, resulting in several linear meters of tightly-packed pulped wood attenuating the signal.

In each of these cases, the answer is to route around the obstruction with multiple access points. If you've got a Wi-Fi mesh kit, use it to your advantage to bounce signals around obstructions: get a clear line of sight to one side of your obstacle, and place an access point there which can relay from another angle that reaches behind the obstacle without needing to go directly through it.

With enough APs and careful enough placement, this may even tame early-1900s construction chickenwire-and-lath walls—we've seen people successfully place access points with clear lines of sight to one another through doorways and down halls, when penetrating the walls themselves is a lost cause.