Solutions to Fake News: Linked Data, Ontologies and Verifiable Claims

Linked Data is a technology for embedding machine- and human-readable information in webpages.  It powers many of the online experiences we use today, with a vast array of the web made available in these machine-readable formats.  The scope of Linked Data use, even within the public sphere, is enormous.

Right now, most websites use Linked Data to ensure their news is presented correctly on Facebook and via search; this markup is primarily supported via Schema.org.
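Below is a minimal sketch of the kind of Schema.org markup a news site might embed as JSON-LD; the property values, names and URLs are illustrative placeholders, not taken from any particular publisher.

```python
import json

# A minimal sketch of Schema.org "NewsArticle" markup a publisher might embed
# as JSON-LD. All values here are illustrative placeholders.
article = {
    "@context": "https://schema.org",
    "@type": "NewsArticle",
    "headline": "Example headline",
    "datePublished": "2016-08-30",
    "author": {"@type": "Person", "name": "Example Author"},
    "publisher": {"@type": "Organization", "name": "Example Publisher"},
}

# Render the <script> element that would sit in the page; this is what
# crawlers (search engines, social platforms) read to present the story.
print('<script type="application/ld+json">')
print(json.dumps(article, indent=2))
print("</script>")
```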

The first problem is that these ontologies do not support concepts such as genre.  This means, in turn, that rather than ‘news’ being classified, as it would be in any ordinary library or newspaper, the way ‘news’ is presented in a machine-readable format is particularly narrow and without (machine-readable) context.

This means, in turn, that the ability for content publishers to self-identify whether their article is an ‘advertorial’, ‘factual’, ‘satire’, ‘entertainment’ or other form of creative work is not currently available in a machine-readable context.
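To illustrate what such self-classification could look like, here is a purely hypothetical sketch; the ‘newsGenre’ property is invented for the example and is not part of the markup publishers commonly use today.

```python
import json

# Purely illustrative: how a publisher could self-classify an article if the
# vocabularies in common use supported it. "newsGenre" is a hypothetical
# property name, not an existing term.
classified_article = {
    "@context": "https://schema.org",
    "@type": "NewsArticle",
    "headline": "Example headline",
    # Hypothetical classification that reader-side software could act upon,
    # e.g. 'factual', 'advertorial', 'satire' or 'entertainment'.
    "newsGenre": "satire",
}

print(json.dumps(classified_article, indent=2))
```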

This lack of machine-readable classification is similar to the lack of ‘emotion’ support provided by social-network silos for sentiment analysis (e.g. research links 1 and 2), where semantic tooling offers organisations the means to profile environments.  Whilst Facebook offers the means to moderate particular words for its Pages product, this functionality is not currently available to humans (account holders).
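For readers unfamiliar with the term, below is a deliberately naive sketch of what sentiment analysis does; real semantic tooling uses far richer lexicons and statistical models, and the word lists here are invented for illustration.

```python
# A naive keyword-based sentiment scorer, purely to illustrate the idea.
POSITIVE = {"kind", "helpful", "great", "thanks"}
NEGATIVE = {"fake", "awful", "hate", "scam"}

def sentiment(text: str) -> str:
    words = {w.strip(".,!?").lower() for w in text.split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("Thanks for the kind and helpful reply"))  # positive
```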

The combination of a missing markup language for classifying posts, alongside technical capabilities made available to ‘persona ficta’ (legal entities) in a manner not similarly available to humans, contributes to the lack of ‘human centric’ functionality these platforms currently exhibit.

Bad Actors and Fact-Checking

In dealing with the second problem (also in association with the use of Linked Data), the means to verify claims is available through the application of ‘credentials’, or Verifiable Claims, which in turn relate to the Open Badges specification.

These solutions allow an actor to obtain verification from third parties, giving their audience greater confidence in the claims represented by their articles.  Whether the task is to “fact check” statements, to ensure images have not been ‘photoshopped’, or other ‘verification tasks’, one or more reputable sources could use verifiable claims to help the end-user (the human reader) gain confidence in what has been published.  Pragmatically, this can be done either locally or via the web through third parties, using Linked Data. For more information, get involved in the W3C; you’ll find almost every significant organisation involved with web technology debating how to build the standards that define the web we want.
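As a hedged sketch of the idea (not a finalised standard), a fact-checking organisation might issue a claim about an article along the lines below; the field names, URLs and rating value are illustrative assumptions layered on the general shape of the W3C Verifiable Claims work.

```python
import json

# Illustrative verifiable claim issued by a third-party fact-checker about an
# article. Field values, URLs and the signature are placeholders.
claim = {
    "@context": "https://www.w3.org/2018/credentials/v1",
    "type": ["VerifiableCredential"],
    "issuer": "https://factchecker.example/",        # the reputable third party
    "issuanceDate": "2016-08-30T00:00:00Z",
    "credentialSubject": {
        "id": "https://news.example/articles/42",    # the article being checked
        "claimReviewed": "Quoted statistics match the cited source",
        "reviewRating": "accurate",                   # illustrative value
    },
    # In practice the issuer signs the credential so readers (or their
    # software) can check it really came from the named organisation.
    "proof": {"type": "ExampleSignature", "jws": "..."},
}

print(json.dumps(claim, indent=2))
```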


General (re: Linked Data)

If you would like to review the machine-readable markup embedded in the web you enjoy today, one of the means to do so is the OpenLink Data Sniffer.  An innovative concept for representing information was produced by Ted Nelson via his Xanadu concept.
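You can also inspect this markup yourself. The sketch below, assuming a placeholder URL, fetches a page and prints any JSON-LD blocks it finds; RDFa and microdata, the other common Linked Data encodings, would need a proper parser.

```python
import json
import re
import urllib.request

# Fetch a page and print any embedded JSON-LD blocks. The URL is a placeholder.
url = "https://example.com/some-article"
html = urllib.request.urlopen(url).read().decode("utf-8", errors="replace")

pattern = r'<script[^>]+type="application/ld\+json"[^>]*>(.*?)</script>'
for block in re.findall(pattern, html, flags=re.DOTALL | re.IGNORECASE):
    try:
        print(json.dumps(json.loads(block), indent=2))
    except ValueError:
        pass  # skip malformed blocks
```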

Advancements in computing technology may make it difficult to trust media sources in an environment that seemingly has difficulty understanding the human-centric foundations of our world, and where the issues highlighted by many, including Eben Moglen, continue to grow.  Regardless of the technical means we have to analyse content (e.g. the Redlink demo), it will always be important that we consider virtues such as kindness; and it is important that those who represent us, in seeking solutions for vulnerable people, put these sorts of issues on the agenda.  “Fake news” has become yet another example (or symptom) of a much broader problem (imho).

A simple (additional) example of how a ‘graph database’ works is illustrated by works such as this DBpedia-related example (see: visualdataweb.org; the original example is broken).  The production of “web 3.0” is remarkably different from former versions (see: startups to smartups) due to the volume of pre-existing web users.  Whilst studies have shown that humans are not really that different, the challenge becomes how to fund the development costs of works that are not commercially focused (i.e. in the interests of ‘persona ficta’) in the short term, and how to address issues such as ‘fake news’ or, indeed, even how to find a public toilet.

Searching for a public toilet (Google search, 30 August 2016)
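Returning to the graph-database example above, the sketch below queries the public DBpedia SPARQL endpoint for resources that link to the ‘Linked data’ concept; the query is minimal, and the endpoint’s availability and exact results can vary.

```python
import json
import urllib.parse
import urllib.request

# Ask the public DBpedia SPARQL endpoint which pages link to dbr:Linked_data.
# Minimal example; endpoint availability and results can vary over time.
query = """
SELECT ?related WHERE {
  ?related dbo:wikiPageWikiLink dbr:Linked_data .
} LIMIT 10
"""
params = urllib.parse.urlencode(
    {"query": query, "format": "application/sparql-results+json"}
)
with urllib.request.urlopen("https://dbpedia.org/sparql?" + params) as resp:
    results = json.loads(resp.read().decode("utf-8"))

for row in results["results"]["bindings"]:
    print(row["related"]["value"])
```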

As ‘human centric’ needs continue to be unsupported by the web, and by the emerging intelligent assistants working upon the same datasets, the problem technologists have broadly produced is a world built for things that ‘sell’, without support for things we value.  Whether it be support for how to help vulnerable people, receipts that don’t fade (i.e. machine-readable rather than thermal), civic services, the means to use data to uphold the ‘rule of law’, to vote and participate in civics, or the array of other examples: we have the technology, but not the accessible applications with which to apply it to social and human needs.

Indeed, the works we produce and contribute to the web are, for the most part, provided not simply freely but at our own cost.  The things that are ‘human’ are treated as less important and are, indeed, poorly supported.

This is the bigger issue.  We need to define the means to distil the concept of ‘dignity’ on the web. Apps such as Facebook often hold GPS history from our phones; does that mean the world should use that data to identify who broke into a house? If it is claimed you broke a speed limit in your vehicle when the GPS records show you were somewhere else, how should that data help you?

Note: this article is based upon an earlier post re: solutions to fake news.