
EU parliament backs tighter rules on behavioural ads – TechCrunch

The EU parliament has backed a call for tighter regulations on behavioral ads (aka microtargeting) in favor of less intrusive, contextual forms of advertising — urging Commission lawmakers to also assess further regulatory options, including looking at a phase-out leading to a full ban.

MEPs also want Internet users to be able to opt out of algorithmic content curation altogether.

The legislative initiative, introduced by the Legal Affairs committee, sets the parliament on a collision course with the business model of tech giants Facebook and Google.

Parliamentarians also backed a call for the Commission to look at options for setting up a European entity to monitor and impose fines to ensure compliance with rebooted digital rules — voicing support for a single, pan-EU Internet regulator to keep platforms in line.

The votes by the elected representatives of EU citizens are non-binding but send a clear signal to Commission lawmakers who are busy working on an update to existing ecommerce rules, via the forthcoming Digital Services Act (DSA) package, due to be introduced next month.

The DSA is intended to rework the regional rule book for digital services, including tackling controversial issues such as liability for user-generated content and online disinformation. And while only the Commission can propose laws, the DSA will need to gain the backing of the EU parliament (and the Council) if it is to go the legislative distance, so the executive needs to take note of MEPs’ views.

The Commission also intends to introduce a second package, aimed at regulating ‘gatekeeper’ platforms by applying ex ante rules — called the Digital Markets Act.

A spokesman for the Commission confirmed it intends to introduce both packages before the end of the year, adding: “The proposals will create a safer digital space for all users where their fundamental rights are protected as well as a level playing field to allow innovative digital businesses to grow within the Single Market and compete globally.”

Battle over adtech

The mass surveillance of Internet users for ad targeting — a space that’s dominated by Google and Facebook — looks set to be a major battleground as Commission lawmakers draw up the DSA package.

Last month Facebook’s policy VP Nick Clegg, a former MEP himself, urged regional lawmakers to look favorably on a business model he couched as “personalized advertising”, arguing that behavioral ad targeting allows small businesses to level the playing field with better-resourced rivals.

However, the model’s legality remains under attack on multiple fronts in the EU.

Scores of complaints have been lodged with EU data protection agencies over the mass exploitation of Internet users’ data by the adtech industry since the General Data Protection Regulation (GDPR) began to apply, raising questions over the lawfulness of the processing and the standard of consent claimed.

Just last week, a preliminary report by Belgium’s data watchdog found that a flagship tool for gathering Internet users’ consent to ad tracking, operated by the IAB Europe, fails to meet the required GDPR standard.

The use of Internet users’ personal data in the high-velocity information exchange at the core of programmatic advertising’s real-time bidding (RTB) process is also being probed by Ireland’s DPC, following a series of complaints. The UK’s ICO has warned of systemic problems with RTB for well over a year, too.

Meanwhile some of the oldest unresolved GDPR complaints pertain to so-called ‘forced consent’ by Facebook, given GDPR’s requirement that consent must be freely given in order to be lawful. Yet Facebook does not offer any opt-out from behavioral targeting; the ‘choice’ it offers is to use its service or not use it.

Google has also faced complaints over this issue, and last year France’s CNIL fined it $57M for not providing sufficiently clear information to Android users over how it was processing their data. But the key question of whether consent is required for ad targeting remains under investigation by Ireland’s DPC almost 2.5 years after the original GDPR complaint was filed, meaning the clock is ticking on a decision.

And still there’s more: Facebook’s processing of EU users’ personal data in the US also faces huge legal uncertainty because of the clash between fundamental EU privacy rights and US surveillance law.

A major ruling (aka Schrems II) by Europe’s top court this summer has made it clear EU data protection agencies have an obligation to step in and suspend transfers of personal data to third countries when there’s a risk the information is not adequately protected. This led to Ireland’s DPC sending Facebook a preliminary order to suspend EU data transfers.

Facebook has used the Irish courts to get a stay on that order while it seeks a judicial review of the regulator’s process, but the overarching legal uncertainty remains. (Not least because the complainant, angry that data continues to flow, has also been granted a judicial review of the DPC’s handling of his original complaint.)

There has also been an uptick in EU class actions asserting privacy rights, as the GDPR provides a framework that litigation funders believe they can profit from.

All this legal activity focused on EU citizens’ privacy and data rights puts pressure on Commission lawmakers not to be seen to row back standards as they shape the DSA package — with the parliament now firing its own warning shot calling for tighter restrictions on intrusive adtech.

It’s not the first such call from MEPs, either. This summer the parliament urged the Commission to “ban platforms from displaying micro-targeted advertisements and to increase transparency for users”. And while they’ve now stepped away from calling for an immediate outright ban, yesterday’s votes were preceded by more detailed discussion — as parliamentarians sought to debate in earnest with the aim of influencing what ends up in the DSA package.

Ahead of the committee votes, online ad standards body the IAB Europe also sought to exert influence, putting out a statement urging EU lawmakers not to increase the regulatory load on online content and services.

“A facile and indiscriminate condemnation of ‘tracking’ ignores the fact that local, generalist press whose investigative reporting holds power to account in a democratic society, cannot be funded with contextual ads alone, since these publishers do not have the resources to invest in lifestyle and other features that lend themselves to contextual targeting,” it suggested.

“Instead of adding redundant or contradictory provisions to the current rules, IAB Europe urges EU policymakers and regulators to work with the industry and support existing legal compliance standards such as the IAB Europe Transparency & Consent Framework [TCF], that can even help regulators with enforcement. The DSA should rather tackle clear problems meriting attention in the online space,” it added in the statement last month.

However, as we reported last week, the IAB Europe’s TCF has been found not to comply with existing EU standards following an investigation by the Belgian DPA’s inspectorate service, suggesting the tool offers quite the opposite of ‘model’ GDPR compliance. (Although a final decision by the DPA is pending.)

The EU parliament’s Civil Liberties committee also put forward a non-legislative resolution yesterday, focused on fundamental rights — including support for privacy and data protection — that gained MEPs’ backing.

Its resolution asserted that microtargeting based on people’s vulnerabilities is problematic, and raised concerns over the technology’s role as a conduit for spreading hate speech and disinformation.

The committee got backing for a call for greater transparency on the monetisation policies of online platforms.

‘Know your business customer’

Other measures MEPs supported in the series of votes yesterday included a call to set up a binding ‘notice-and-action’ mechanism so Internet users can notify online intermediaries about potentially illegal online content or activities — with the possibility of redress via a national dispute settlement body.

MEPs rejected the use of upload filters or any form of ex ante content control for harmful or illegal content, saying the final decision on whether content is legal should be taken by an independent judiciary, not by private undertakings.

They also backed dealing with harmful content, hate speech and disinformation via enhanced transparency obligations on platforms and by helping citizens acquire media and digital literacy so they’re better able to navigate such content.

A push by the parliament’s Internal Market Committee for a ‘Know Your Business Customer’ principle to be introduced — to combat the sale of illegal and unsafe products online — also gained MEPs’ backing, with parliamentarians supporting measures to make platforms and marketplaces do a better job of detecting and taking down false claims and tackling rogue traders.

Parliamentarians also supported the introduction of specific rules to prevent (not merely remedy) market failures caused by dominant platform players as a means of opening up markets to new entrants — signalling support for the Commission’s plan to introduce ex ante rules for ‘gatekeeper’ platforms.

Liability for ‘high risk’ AI

The parliament also backed a legislative initiative recommending rules for AI, urging Commission lawmakers to present a new legal framework outlining the ethical principles and legal obligations to be followed when developing, deploying and using artificial intelligence, robotics and related technologies in the EU, including for software, algorithms and data.

The Commission has made it clear it’s working on such a framework, setting out a white paper this year — with a full proposal expected in 2021.

MEPs backed a requirement that ‘high-risk’ AI technologies, such as those with self-learning capacities, be designed to allow for human oversight at any time — and called for a future-oriented civil liability framework that would make those operating such tech strictly liable for any resulting damage.

The parliament agreed such rules should apply to physical or virtual AI activity that harms or damages life, health, physical integrity or property, or that causes significant immaterial harm resulting in “verifiable economic loss”.

This report was updated with comment from the Commission and additional detail about the Digital Markets Act.



Natasha Lomas
