Benedict’s Newsletter: No. 285

Google is banning a Chinese app developer with links to Baidu from the Play Store for spyware and ad fraud; its apps have had over half a billion installs. Curation matters. Link

Apple is also apparently doing a purge of apps. Boring! But there are some interesting things going on under the hood. In this case Apple is purging apps that use the ‘MDM’ device management system to manage screen time and children’s phone use. The problem is that MDM gives near-total control of the phone and visibility into things like location and camera use, and Apple has always been clear it’s only allowed for internal company apps and software development, not consumer apps (which is exactly why it blocked Facebook’s apps for 24 hours last year). So: Apple is getting steadily more aggressive on privacy and on blocking third parties from getting direct access to your device. However, the NY Times reported this as ‘Apple blocking apps that manage screen time’, with quotes from the affected companies, without mentioning the privacy aspects or MDM at all, and claiming that this was because these apps compete with Apple’s own Screen Time feature… which is free. This makes very little sense. There IS an argument that Apple should create more APIs for this so you don’t need MDM’s invasive access (and it probably will). Fundamentally, I don’t think arguing about whether the App Store should be curated is very interesting – we had this argument a decade ago and Apple was right (see the Google story above). But it IS interesting that what was once a geeky, inside-tech argument now reaches the mainstream press, with Elizabeth Warren using it as part of her campaign platform. Tech product arguments are now part of national politics. Link

We still don’t really know who owns Huawei. This hardly helps build confidence in the face of all the security concerns, though it does occur to me that even if the founder or Chinese pension funds etc really did own 100%, that might not actually change the Chinese government’s ability to influence it. Link

For the ‘AI bias’ files: the mayor of a US town who thinks (or has been told) that a machine learning ‘crime prediction’ system is ‘99% accurate’. The ‘AI ethics’ problem is not so much anything done at Google – it’s things done by third-tier vendors who don’t really understand the science, salespeople who don’t care, and unsophisticated buyers who hear ‘AI’ and imagine this is HAL 9000. Of course, this is exactly the same problem we had with databases, and punch cards: people being people. Link
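To see why a headline ‘99% accurate’ figure can be close to meaningless on its own, here is an illustrative sketch with entirely made-up numbers (nothing to do with the actual vendor’s system): when the thing being predicted is rare, a model that never predicts it at all still scores 99%.

```python
# Illustrative only: hypothetical numbers, not the vendor's data or model.
# With a rare outcome, a useless model can still report high "accuracy".

n_cases = 10_000
n_actual_crimes = 100  # assume only 1% of cases are positive

# A "model" that simply predicts "no crime" for every single case:
# it is right on all the negatives and wrong on all the positives.
correct_predictions = n_cases - n_actual_crimes

accuracy = correct_predictions / n_cases
print(f"Accuracy: {accuracy:.0%}")  # 99% -- yet it never identifies a single crime
```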

Walmart is going to commission original TV. 🤷🏻‍♂️ Link

A good and balanced piece on how Facebook’s moderation team dealt with people sharing the New Zealand shooting video. This is nothing like as simple or easy as people would like to believe. Link

Interesting paper on how terrorists use the internet to spread ideas. Link

Fei-Fei Li (machine learning pioneer) and Yuval Noah Harari (pop historian du jour) on the impact of AI. Link

LA is using mobile phone data to work out what kinds of journeys people take, in order to redesign its bus system (I saw people doing this in Africa a decade ago). Interesting as a source of data but also interesting for the data itself – most journeys are actually pretty short (which feeds into the micro-mobility story). Link

From the ‘dirty tricks on Amazon Marketplace’ file: paying $10k a month to get to the top of listings. Link


Facebook to Rank News Sources by Quality to Battle Misinformation

Tech giant will rely on user surveys of trustworthiness to try to preserve objectivity

Facebook Inc. plans to start ranking news sources in its feed based on user evaluations of credibility, a major step in its effort to fight false and sensationalist information that will also push the company further into a role it has long sought to avoid—content referee.
The social-media giant will begin testing the effort next week by prioritizing news reports in its news feed from publications that users have rated in Facebook surveys as trustworthy, executives said Friday. The most “broadly trusted” publications—those trusted and recognized by a large cross-section of Facebook users—would get a boost in the news feed, while those that users rate low on trust would be penalized.

.. This shift will result in news accounting for about 4% of the posts that appear in users’ feeds world-wide, down from the current 5%
.. About 45% of U.S. adults get news from Facebook
.. Mr. Zuckerberg said the change—which will be tested leading up to the 2018 U.S. midterm elections—is necessary to address the role of social media in amplifying sensationalism, misinformation and polarization. “That’s why it’s important that News Feed promotes high quality news that helps build a sense of common ground,” he wrote in his post... He compared the approach with Facebook’s reliance on third-party fact-checkers to determine whether or not an article is completely fabricated.

.. On Friday, some publishers and media observers expressed concern about the ranking change, which, like other Facebook news-feed changes, may have a significant and unpredictable impact on news publishers that rely on the site for traffic, including the Journal.

.. Facebook’s trust score would boost the news-feed presence of well-known and widely trusted publications even if users disagree with the content or aren’t avid readers.
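For what it’s worth, here is a minimal sketch of how a ‘broadly trusted’ score along the lines described above might be computed from survey answers. The two survey questions, the weighting, and all the names below are assumptions for illustration, not Facebook’s actual methodology.

```python
# A sketch, not Facebook's method: treat a publication as "broadly trusted"
# only if a wide cross-section of surveyed users both recognise it and trust it.
from collections import defaultdict

# Hypothetical survey rows: (user_id, publication, recognises_it, trusts_it)
responses = [
    ("u1", "Daily Bugle", True, True),
    ("u2", "Daily Bugle", True, False),
    ("u3", "Daily Bugle", False, False),
    ("u1", "The Gazette", True, True),
    ("u2", "The Gazette", True, True),
    ("u3", "The Gazette", True, True),
]

def broad_trust_scores(rows):
    seen = defaultdict(int)        # users surveyed about the publication
    recognised = defaultdict(int)  # users who recognise it
    trusted = defaultdict(int)     # users who recognise *and* trust it
    for _, pub, knows, trusts in rows:
        seen[pub] += 1
        if knows:
            recognised[pub] += 1
            if trusts:
                trusted[pub] += 1
    scores = {}
    for pub in seen:
        reach = recognised[pub] / seen[pub]             # how widely it is recognised
        trust = trusted[pub] / max(recognised[pub], 1)  # trust among those who know it
        scores[pub] = reach * trust                     # "broadly trusted" needs both
    return scores

print(broad_trust_scores(responses))
# e.g. {'Daily Bugle': 0.33, 'The Gazette': 1.0}
```

The reach-times-trust weighting is one way to capture the article’s point that a publication scores well only if it is both widely recognised and trusted by those who know it, rather than merely loved by a small audience.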