About W3C

History and Future of WWW

W3C was established by Sir Tim Berners-Lee, building upon a set of particular ideological decisions he made in producing the web we know.  He has communicated these decisions, including his future vision of the Web from a standards-development, or rather "network theory", perspective at an Annenberg Networks Network seminar.

On the 6th of September last year, Jeff Jaffe replied to a query I had in relation to philosophical concepts pertaining to the growth of W3C and how it is legally structured, with the following:

"W3C is not a legal entity.  W3C staff exists at four research universities (MIT, ERCIM, Keio, and Beihang) and legally it is a series of contracts that binds the universities and member organisation to work together on web standards."  Through this mechanism, the W3C Patent Policy is administered in relation to the role W3C plays in internet governance.

W3C supports the growth of open standards through a methodology that selects a particular field of endeavour and resources it to support cooperative, collaborative group work, in which technical solutions are designed in the form of 'web standards'. Over time, these standards are released in a form supported by the meaningful use of the patent policy ecosystem. In more recent times, many of the projects taken up by W3C start off in a Community Group (CG). Over time, the work produced in a CG may develop into a scope of work that is developed further through an Interest Group, and finally a Working Group, which in turn results in the publication of standards implemented, and provided 'patent pool support', by participating W3C Members.  W3C work is generally carried out online, with mailing lists, code and even strategy documents available online.  Increasingly, W3C also maintains online resources for Permanent Identifiers for the Web.

The role and meaningful utility of W3C has no known equal within the sphere of its operationally realised purpose, as developed since it was first established.  Through the leadership (and somewhat onerous role) of Tim Berners-Lee, an array of semi-structured frameworks, backed by the international patent pools that power the web, makes possible the means to ensure free use of the global standards made through it.  These interoperable, cooperatively developed standards are extensible in ways that have no similar alternative in the field of ICT 'knowledge engineering' and 'information management' systems tooling.  Whilst W3C work produces an array of constituent elements, its principal role is custodianship of URI-based layered technologies, built upon the underlying topological constituents of the internet.

In works that, moreover, seek to extend the beneficial development of the world via ICT, a remarkable influence is instrumentally built through the use and adaptation of the 'semantic web' concept.

Circa 2001 – “W3C technology road map. In the end, all W3C activities are in service to the top-level goal of reaching the semantic Web’s full potential. Arrows indicate “how” things are implemented; following them in reverse indicates “why” they exist (or should)”. Source: http://jmvidal.cse.sc.edu/library/w4012.pdf

The internet has reached a level where access to it is debated as a human right, yet the complexities of these arguments extend to its meaningful use, beyond simple access.  The means through which this can now be better addressed make instrumental use of works produced by W3C.

The internet and economics are now inextricably linked, and it is worth noting the Internet Society as another key pillar of international internet governance worldwide.  Whilst it is arguably impossible to produce anything without some form of inherent ideological definition woven into a specification (whether via conscious, subconscious or unintended decision-making processes), W3C and its fellow internet-governance constituents are instrumental in the provision of intellectual property rights that entitle others to use ICT technologies without being required to pay financial royalties to companies, supporting the meaningful human rights that relate to, and depend upon, the ability of persons to communicate, as is foundationally required by all.

For more information see the W3C website and the related Wikipedia article.

Building an Economy based upon Knowledge Equity.

As illustrated by the OECD, Knowledge-Based Capital is a well-known form of asset that makes up a significant proportion of the valuations of corporations.  Whilst the technology tools are now well-established enough to support new models, the means to economically evaluate the relationship between 'the changing nature of work' and the needs of workers to participate in the 'knowledge economy' as natural persons is known by technologists to require an alternative means of information management, designed to bring to market an operating model that can be built upon.

This in turn builds upon the idea that the role of 'knowledge-based equity' is becoming increasingly important as the 4th industrial era impacts natural persons.

The solution put forth is based upon work I have been developing over many years to establish a technically viable means to build a ‘knowledge’ banking platform.  

Over the past six or so years, this journey led me to works on the international standards needed to evolve the technology solutions required to form a workable embodiment.

Long-term works, known as the 'semantic web', were found, and it was through the extension of these works that the tooling was able to evolve.  Whilst I have contributed in various ways, it is only due to the similarity of the ideas, the commitment made to them by those such as Tim Berners-Lee, inventor of the World Wide Web, and international collaboration that the technical means to build systems based on technology standards has been achieved.

Social Encryption: An Introduction

Human beings are far more complex than computers.

As social-organisms, making use of our real-world environment alongside other senses; we are very different to the things we’ve made, even though we’re now using them to make us ‘better’. 

ICT systems are already ‘smarter’ than us in a number of ways.  They’re able to process information in a way that no group of humans could ever achieve in a competitive timeframe.  

ICT networks and sensors continue to attempt to mimic an array of behaviours that humans have; yet humans generally develop these skills over long periods of time, and computers that do many of these new things are still new.

Amongst the most rudimentary of core assumptions humans make is that we are able to rely upon our capacity to form a shared comprehension of things we consider to be constituents of our ‘reality’.

ICT is being used to both enhance and augment these capacities.

With sufficient evidence that any forum shares a level of consensus amongst those involved, Social Graph enabled ICT systems can be used to target specific groups and influence those outcomes.

The practice of doing so becomes much easier where there is only one 'system'; however, 'data quality' suffers greatly as a consequence.

The means through which humans are 'programmed' is different to the way online systems are developed, which is to mimic and support the needs and 'best interests' of their operators.

This is influenced by the socioeconomic frameworks that institutions are bound by law to maintain; critical characteristics that are different to what it means for all of us to be human.

For example, computers cannot care for children, other than as may be computed; for instance, that they be provided additional stimulus to warrant more 'economic attention'.

In decentralising the web, the means to build social encryption is considered amongst the most important underlying pillars required for socioeconomic growth.  The consequence of decentralising data custodianship, access and discovery (with federated queries enabling outputs rendered by dynamic agents) is considered able to dramatically improve data quality.

The concept of ‘social encryption’ is about making use of a multitude of networked, yet independently managed, computer systems in a manner that involves a large number of human beings.  

Q: How can we verify the claim that Tim Berners-Lee appeared at the 2012 Summer Olympics opening ceremony? A: Tens of thousands of people were there, tens of millions watched live, and there are now millions of links online, across the web, improving the means available to verify whether or not this is true. The reality of these facts can only be changed by causing every one of these agents to be made irrelevant, without being noticed for doing so.

The more people involved who are using different, but linked, systems, the more devices/targets are involved, making some forms of attack more difficult.

A common property of all participants is time. Our computer systems do not work very well unless they track activity in relation to time.  By forming the means to produce records that are distributed across a multitude of systems, with a multitude of participants contributing towards a unified informatics environment, it becomes possible to make use of humans to produce 'social encryption'.
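One way to illustrate the idea (a simplified, hypothetical sketch, not a specification from these works): timestamped records held across many participants can be chained together so that altering any earlier record invalidates every later one, which is one way time can underpin 'social encryption'.

```python
import hashlib
import json
import time

def add_record(chain, payload):
    """Append a timestamped record whose hash commits to the previous record."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"time": time.time(), "payload": payload, "prev": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    chain.append(record)
    return record

def verify(chain):
    """Re-compute each hash; any tampered record breaks the chain."""
    prev = "0" * 64
    for record in chain:
        body = {k: v for k, v in record.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if record["prev"] != prev or record["hash"] != expected:
            return False
        prev = record["hash"]
    return True
```

With many participants each holding copies of such chains, an attacker would need to rewrite every copy consistently, and without being noticed, to change the record.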

The impact of doing so is to improve the means to address threats otherwise posed by cryptography applied to singular systems, alongside issues posed by AI technologies, and to build a self-preserving, robust information management system that is more difficult to attack technologically.

Perhaps the most important consideration is how ICT can be used to protect against economic attacks.  The problem is not whether the data exists or not; more often than not, the means to use data to evaluate any situation does indeed exist somewhere.  The problem is how we make use of data, given standardised practices through which information management infrastructure is built upon an idea of economic merit, on old technology which now has other options.

The technology and tools required to make a change already power the largest organisations operating many of the world's most important ICT systems today.

Human Consciousness

(embedded video playlist)

Decentralized Web Summit 2018


For more information see https://decentralizedweb.net/

Introduction to AI

(embedded video playlist)

Introduction to Semantic Web

(embedded video playlist)

Image Recognition Video Playlist

(embedded video playlist)

What is Linked Data?

Trust Factory 2017

Credentials and Payments by Manu Sporny


Web of Things – an Introduction

(DOCUMENT STATUS – QUICK DRAFT)

Web of Things introduces the use of Semantic Web tooling applied to the Internet of Things (IoT).

Some background can be found via the links below:

Links:

  • 2009 (semantics): https://www.w3.org/2009/03/xbrl/talks/intro2semweb-dsr.pdf
  • 2010 (flyer): https://webofthings.org/wot/2010/WoT_2010_cfp.pdf
  • 24 January 2010: https://www.w3.org/2010/Talks/0123-dsr-sofsem.pdf
  • 2010: https://www.w3.org/2010/Talks/sofsem2010-raggett.pdf
  • 8–12 September 2013: http://www.ubicomp.org/ubicomp2013/adjunct/adjunct/p1487.pdf
  • 2014 (web of thoughts): https://www.w3.org/2014/10/29-dsr-wot.pdf
  • April 2015: https://hal.inria.fr/hal-01244735/document
  • July 2015 (thesis): https://tel.archives-ouvertes.fr/tel-01178286/document
  • 2015: https://ieeexplore.ieee.org/document/7111885/
  • 2015: https://www.w3.org/2015/05/wot-framework.pdf
  • March 2016: https://www.iab.org/wp-content/IAB-uploads/2016/03/Raggett-Kanti-Datta.pdf and https://www.ietf.org/proceedings/interim-2016-t2trg-02/slides/slides-interim-2016-t2trg-2-11.pdf
  • September 2016: https://www.w3.org/2016/09/IoTW/wot-intro.pdf
  • 2017: https://iotweek.blob.core.windows.net/slides2017/THEMATIC%20SESSIONS/Emerging%20IoT%20Researches%20and%20Technologies/Web%20of%20Things/D.%20Raggett-%20Countering%20Fragmentation.pdf
  • 2017 (WoTCity white paper): https://wotcity.com/WoTCity-WhitePaper.pdf
  • 2018: https://www.w3.org/2018/05/08-dsr-wot.pdf

RWW & some Solid history

In 2009, Tim Berners-Lee wrote a document about 'read write linked data', which is in turn supplemented by the document he also authored about 'socially aware cloud storage'.  Together, these elements forge what is considered to be the World Wide Web standards-based work on a solution where people are able to store their own data online, in a manner that supports 'linking' between online data sources across the web using the Semantic Web technology ecosystem.

An underlying storage standard, called the Linked Data Platform, has evolved to support the meaningful utility of these concepts.

Key academic theses were produced by Andrei Sambra (2013), Joe Presbrey (2014) and Amy Guy (2017).  The evolution of what was first called RWW (note: the W3C Community Group for RWW) was later 'spun out' as 'Solid' following a donation by Mastercard.

Notably, one of the first applications produced to search RWW systems, creating a decentralised index of the persons involved, was produced by Andrei and named 'Webizen', which is the source of inspiration for this site's name.

 

Inferencing (introduction)

Human inference (i.e. how humans draw conclusions) is traditionally studied within the field of cognitive psychology; artificial intelligence researchers develop automated inference systems to emulate human inference.  A constituent of the means through which semantic web technology supports 'artificial intelligence' functionality is Semantic Inferencing.

Semantic Inferencing makes use of available structured data, formatted through the use of ontologies, to support the means through which assumed conclusions can be presented with a probabilistic degree of certainty.

The more data points made available in connection with a specified form of query, the better systems are able to improve the probability of query responses being correct.  Semantic inferencing is an important constituent of the broader ecosystem of 'semantic web tools'.
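As a minimal sketch of the idea (pure Python, with hypothetical example triples rather than a real ontology): given `rdfs:subClassOf` statements, an inference engine can derive type facts that were never stated explicitly.

```python
def infer_types(triples):
    """Derive implied rdf:type facts from rdfs:subClassOf statements."""
    subclass_of = {}
    for s, p, o in triples:
        if p == "rdfs:subClassOf":
            subclass_of.setdefault(s, set()).add(o)
    inferred = set(triples)
    changed = True
    while changed:  # keep applying the rule until nothing new is derived
        changed = False
        for s, p, o in list(inferred):
            if p == "rdf:type":
                for parent in subclass_of.get(o, ()):
                    if (s, "rdf:type", parent) not in inferred:
                        inferred.add((s, "rdf:type", parent))
                        changed = True
    return inferred

# Hypothetical data: only "Fido is a Dog" is stated directly.
triples = {
    ("ex:Fido", "rdf:type", "ex:Dog"),
    ("ex:Dog", "rdfs:subClassOf", "ex:Mammal"),
    ("ex:Mammal", "rdfs:subClassOf", "ex:Animal"),
}
```

Running `infer_types(triples)` also yields "Fido is a Mammal" and "Fido is an Animal"; production systems apply the full RDFS/OWL rule sets rather than this single rule.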

Semantic Web (An Intro)

Around the year 2000, a concept called the 'semantic web' was brought together, and it has continued to evolve since.  By 2007, the use of semantic web technology had grown to a point where Tim Berners-Lee wrote an article about the 'Giant Global Graph' that had been forged through the use of it.

W3C technology road map. In the end, all W3C activities are in service to the top-level goal of reaching the semantic Web’s full potential. Arrows indicate “how” things are implemented; following them in reverse indicates “why” they exist (or should) IEEE INTERNET COMPUTING http://computer.org/internet/ JULY • AUGUST 2001 PG: 13

SOURCE: http://jmvidal.cse.sc.edu/library/w4012.pdf

Historically, it is noted that these technologies were, in part, born out of the DARPA Agent Markup Language.

The 'semantic web' ecosystem of technologies has an array of different names and technical constituents which have developed over time.

Critically, the semantic web employs RDF in an array of different serialisation formats.  Almost any form of data can be converted into RDF.

Once data is stored in an RDF format, it can be employed by systems that provide the means to query data structured in this way.  This is most often done by way of a family of query language services known as SPARQL.

SPARQL family solutions include (but are not limited to) SPARQL-MM, which provides support for multimedia, and SPARQL-FED, which provides the means to query multiple endpoints.
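At its core, SPARQL works by matching triple patterns against a graph. A toy illustration of that core idea (not a SPARQL implementation; the data and names are made up):

```python
def match(triples, pattern):
    """Match one triple pattern; strings starting with '?' are variables."""
    results = []
    for triple in triples:
        binding = {}
        for pat, val in zip(pattern, triple):
            if pat.startswith("?"):
                binding[pat] = val  # bind the variable to this value
            elif pat != val:
                binding = None  # constant mismatch: pattern fails
                break
        if binding is not None:
            results.append(binding)
    return results

data = [
    ("ex:TimBL", "ex:invented", "ex:WWW"),
    ("ex:TimBL", "ex:founded", "ex:W3C"),
]
# Analogous to: SELECT ?what WHERE { ex:TimBL ex:invented ?what }
bindings = match(data, ("ex:TimBL", "ex:invented", "?what"))
```

Real SPARQL engines join many such patterns, and federated variants (SPARQL-FED) dispatch them across multiple endpoints.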

Somewhere around 2009, an attempt began to rebrand the Semantic Web ("SemWeb") and RDF under the term 'Linked Data'.  Whilst the extent of this technology's use is not well known, it is indeed the case that the vast majority of web services contain RDF and are therefore constituent elements of the broader 'semantic web'.

One of the ways this can be better understood is by reviewing the means through which ontologies are currently used, and/or installing relevant plugins that provide the necessary tools to make it easier to see the 'web of (structured) data'.

 

Introduction to Ontologies

On the Semantic Web, vocabularies define the concepts and relationships (also referred to as “terms”) used to describe and represent an area of concern. Vocabularies are used to classify the terms that can be used in a particular application, characterize possible relationships, and define possible constraints on using those terms. In practice, vocabularies can be very complex (with several thousands of terms) or very simple (describing one or two concepts only).

There is no clear division between what is referred to as “vocabularies” and “ontologies”. The trend is to use the word “ontology” for more complex, and possibly quite formal collection of terms, whereas “vocabulary” is used when such strict formalism is not necessarily used or only in a very loose sense. Vocabularies are the basic building blocks for inference techniques on the Semantic Web.

Source: https://www.w3.org/standards/semanticweb/ontology

The most commonly used ontologies are Schema.org, which is most notably used for powering search, and the Open Graph Protocol, which supports the means through which web content can be reposted on Facebook.
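For example, a minimal Schema.org description can be embedded in a page as JSON-LD (the values below are illustrative):

```python
import json

# A Schema.org 'Person' description, as commonly embedded in web pages
# inside a <script type="application/ld+json"> tag for search engines.
person = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Tim Berners-Lee",
    "jobTitle": "Director",
    "affiliation": {"@type": "Organization", "name": "W3C"},
}

jsonld = json.dumps(person, indent=2)
```

The `@context` key is what ties the plain keys (`name`, `jobTitle`) back to globally identified RDF terms.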

Beyond these two notable examples, an array of public ontologies can be found through sites such as The Linked Open Data Cloud.

 

Verifiable Claims (An Introduction)

An important part of human 'identity' is the way claims are made about a person, and in relation to a person.  Claims-related 'instruments' are used throughout society, to be relied upon in association with many interactions.

The W3C Community Group 'Credentials' was established to support works designed to deliver outcomes required in this area.

Part of related works produced include the open-badge version 2 specification which can be found here.

These works make use of RDF and URIs to support the development and use of claims made between an authority of some sort and what may be called 'the data subject'.  For instance:

A BANK-CARD
A person has a bankcard that supports their need to make payments.  The banking card is owned by the financial institution providing the financial instrument, or 'card'.  The purpose of providing it to the person is to support their means to use the card to make use of their bank account.

A BIRTH CERTIFICATE

A Birth Certificate is issued, most often, by a government. The 'subject' of that document is the person whom the certificate provides evidence about in relation to their birth.  The information presented by a birth certificate supports statements about whether or not the person is over a certain age (i.e. over 18 or over 21), where they were born and their nationality, who their parents were, etc.

A POSTAGE STAMP

A postage stamp is applied to an item that is sent through the post.  The stamp, and related markings made by the postage service provider, assist in verifying that the envelope (and its contents) was sent through the mail system at a particular point in time, etc.

SUMMARY

RDF-based 'verifiable claims' provide the means to employ third-party claims made in relation to people as a constituent of the semantics employed in running, processing and subsequently presenting a query.
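As a highly simplified sketch of the issue/verify cycle (the real W3C data model uses linked-data proofs or JWTs with public-key cryptography; the issuer, key and claim fields below are hypothetical):

```python
import hashlib
import hmac
import json

ISSUER_KEY = b"issuer-secret-key"  # hypothetical secret held by the issuer

def issue_claim(subject, claim):
    """Issuer signs a claim about a subject (HMAC stands in for a real proof)."""
    payload = json.dumps({"subject": subject, "claim": claim}, sort_keys=True)
    signature = hmac.new(ISSUER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}

def verify_claim(credential):
    """Verifier recomputes the signature to check the claim is untampered."""
    expected = hmac.new(
        ISSUER_KEY, credential["payload"].encode(), hashlib.sha256
    ).hexdigest()
    return hmac.compare_digest(expected, credential["signature"])
```

In the real ecosystem the verifier holds only the issuer's public key, so it can check claims such as "over 18" without contacting the issuer.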

Tim Berners Lee – Turing Lecture


More information: https://amturing.acm.org/award_winners/berners-lee_8087960.cfm

Basic Media Analysis – Part 3 (Text & Metadata)

Metadata

“a set of data that describes and gives information about other data.”

Basic Media Analysis – Part 2 (visual)

This is a basic introduction which will be followed up in later posts.

Essentially, the analysis of video is, for the most part, very similar to the analysis of images, as video is a stream of images.

The first element of analysing images and video is that the files themselves contain an array of metadata, added as part of the file-creation process (depending on how it is done).  The 'metadata' contained within images can include:

  • Time/Date it was taken
  • Where it was taken (IE: GPS coordinates)
  • Information about the device it was taken with
  • Copyright information / information about who took the image
  • other embedded image metadata.

These elements of data do not rely upon the image or video being of good quality.  They are simply data created as part of creating the file in the first place.  MetaPicz is one example of an online application/service that exposes this information.   The next process is to analyse the content depicted in the image itself…
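To give a feel for where this metadata lives, here is a minimal sketch that locates the Exif (APP1) segment inside a JPEG byte stream. Real tools (e.g. exiftool) then parse the TIFF structure inside that segment; this sketch only finds it.

```python
import struct

def find_exif_segment(data: bytes):
    """Scan a JPEG byte stream and return the Exif payload, or None."""
    if data[:2] != b"\xff\xd8":  # SOI marker: not a JPEG
        return None
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:  # every segment starts with 0xFF
            return None
        marker = data[i + 1]
        if marker == 0xD9:  # EOI: end of image, no Exif found
            return None
        length = struct.unpack(">H", data[i + 2:i + 4])[0]
        payload = data[i + 4:i + 2 + length]
        if marker == 0xE1 and payload.startswith(b"Exif\x00\x00"):
            return payload[6:]  # the TIFF data holding date, GPS, device...
        i += 2 + length
    return None
```

The date/time, GPS and device fields listed above are all records inside this TIFF payload.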

The first issue with analysing vision is ensuring the images are of sufficient quality to identify, analyse and process the informatics available in the imagery.  Where required, correcting the vision is a useful first step.  Beyond the usual processes of adjusting contrast, brightness and other standardised image-processing methods, 'super-resolution' processes are increasingly becoming available.

One process when using multiple still images is detailed here;


or alternatively, this guide on PetaPixel. Once these processes are done, others involving the use of AI-related processes include those detailed here.

The time-consuming 'trick' is to go through a multitude of processes with an appropriate 'treatment methodology', involving the use of 'master' and derivative content stacks; that in turn requires tooling, inclusive of appropriate equipment, to do effectively.

Once the source material has been processed to obtain the best possible visual quality, the next step is to produce 'entity graphs', or further 'structured data', converting objects in the vision to a structured dataset.

One of the basic differences between video and still images is the timecode.  Ideally, storage of metadata/structured data in relation to video content includes timecode information.
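A simple illustration of mapping a frame number to a timecode (assuming a constant integer frame rate; drop-frame timecodes used in broadcast are more involved):

```python
def frame_to_timecode(frame, fps=25):
    """Convert a frame index to an HH:MM:SS:FF timecode string."""
    seconds, frames = divmod(frame, fps)
    minutes, secs = divmod(seconds, 60)
    hours, mins = divmod(minutes, 60)
    return f"{hours:02d}:{mins:02d}:{secs:02d}:{frames:02d}"
```

Storing a timecode like this alongside each detected entity lets structured data point back to the exact moment in the video.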

One of the seminal presentations with respect to entity recognition in vision is the TED Talk about ImageNet.

In an attempt to make things easier, I'll try to break down modern image analysis into a few different categories:

  • Identification of ‘Things’
  • Identification of ‘Persons’ or ‘Faces’
  • Identification of ‘Emotions’ or ‘gestures’
  • Biometrics – The identification of a unique living organism

There is an 'emerging' array of services available to the public with similar capabilities, which this post will not explore, other than to highlight the emergent field of 'knowledge banking'.  These services are producing a significant mass of information, leveraging organisational scale to enhance AI, classification and interpretation technologies.  This in turn produces a core asset for these organisations: API access to enhanced analytics capabilities, most often on a fee-for-service basis, some of which they provide public access to by way of their online services.

To produce tooling that is truly 'enhanced' beyond traditional knowhow, it's essential to DIY ("Do It Yourself").

The easy way to outline services (in a simple way) is dot points:

Once the data has been retrieved, store the informatics provided by the tools in a database (inclusive of timecode, if video), ideally in an RDF format. The usefulness of RDF lies in enabling the metadata/structured data discovered in media to become part of the broader database that is the web.
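A sketch of that last step (the URIs and vocabulary below are hypothetical, invented for illustration; a real pipeline would use an RDF library): serialising extracted observations as N-Triples lines.

```python
def to_ntriples(observations):
    """Serialise (subject, predicate, object, timecode) tuples as N-Triples.

    Object values are written as plain literals; the predicate vocabulary
    here is made up for illustration.
    """
    lines = []
    for subject, predicate, obj, timecode in observations:
        lines.append(f'<{subject}> <{predicate}> "{obj}" .')
        lines.append(f'<{subject}> <http://example.org/timecode> "{timecode}" .')
    return "\n".join(lines)

observations = [
    ("http://example.org/video1#frame100",
     "http://example.org/depicts", "dog", "00:00:04:00"),
]
```

Because each subject is a URI, these statements can later be linked against any other RDF data on the web.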

Basic Media Analysis – Part 1 (Audio)

When collecting materials, media files are long and often disused.  The process of turning voice from audio files into something useful, such as a transcript, once required a person to manually transcribe the audio (a service that is still available), rather than there being an accessible and accurate automated method.

Media tells a story that incorporates different information to what can otherwise be found solely via text or other forms of metadata.   Whilst emotional intonation and other relevant capacities of audio analysis into machine-readable formats are constituents of what can be done, this guide provides some basic examples of how to transcribe audio to text, producing text-based information that can be used for further analysis to be covered in a different post.

ONLINE SERVICES & TOOLS

After a short amount of time searching for basic tools, three have been easily identified, alongside the means to use YouTube to perform this task.

YOUTUBE

By uploading media to YouTube, you can have YouTube transcribe the audio automatically.   Searching Google using terms like "automatically transcribe audio using YouTube" will easily surface the method.

A number of online services exist to provide automatic audio-to-text conversion.  Many of them provide a free trial.  A few examples include:

Sonix (invite link) provides 30 minutes free.

Trint also provides 30 minutes free.

SpokenOnline  also provides 30 minutes free.

Local desktop alternatives include products provided by Nuance, which has a long history in the field, producing solutions for multiple sectors.

 

Data Recovery: Laptop & Computers

Data recovery on computers and laptops can be a complex task, and in most cases quite time-consuming.   In cases where physical hardware damage is the cause of data loss, the likelihood of getting the data back goes down…

In past experience, even with the same type of drive, if it was produced in a different batch the parts won't work on the old drive.   This is something to take into consideration if you or your organisation is storing important data.  When purchasing storage devices (i.e. IDE/SATA drives that are not solid state) it may well be worth purchasing a spare or two with the same manufacture codes, or ensuring one is available, so the daughter-board can be stripped off the spare to retrieve lost data in case the original daughter-board dies…

Furthermore, it is not advisable to create a striped array over a multitude of disks if you at all value the data you intend to store on that storage device.

If you're just looking for an ultra-quick cache for content/data you have stored elsewhere, then that's fine; just don't trust it for long-term storage.

PROCESS FOR RETRIEVING DATA FROM ‘COMPUTERS’.

For non-technical people who don't know the difference between the storage device and the 'computer': most computers have a storage device that can be removed from the computer, even when the computer doesn't work.

More common examples of where this happens are where the drive is a little faulty, and kind of works, sometimes; or where the power supply or some other part in the computer has stopped working, and the data is 'trapped'.

Another example is where something bad has happened: you know there should be a record of it in the computer, but it's not obvious, and you want to check it out.

STEP 1: REMOVE THE STORAGE DEVICE

If you can't remove the storage device, you're not going to have much joy.  Some newer computers have their storage devices fixed into the circuitry of the device, and if it doesn't work, you're going to be in trouble.

For the majority of computers made over the past 20+ years, the storage device can be removed.

What you don't want to do is write anything to that disk. That means you don't want to turn it on or use it until you've tried to get the data you want back.

If it’s simply a case of the computer dying, and you need to move the data to your new computer; that’s easier.

In any case, find some suitable screwdrivers and disassemble the computer to find the hard drive.  If you don't know what you're looking for:

a. search Google for 'hard drive' images, or

b. get someone else to help you.

STEP 2: Plug the HDD into a new computer

The local computer shop has an array of cables and cradles that can help you plug your old hard drive into a new computer.  Another option is to get an 'external case' for your old hard drive, if you want to keep it around.

STEP 3: Download data

If you're simply going to copy the data from 'your old computer' to 'your new computer', then that's relatively straightforward.  Browse the directories on the hard drive and copy them across to your new computer's hard drive.

Job done.

If you've lost data, the drive isn't working so well, or there is some other issue, it becomes useful to get another drive with at least the same amount of space as the one you're intending to get the data from, to use as a 'working drive' to copy all your files across to.

STEP 4: DATA RECOVERY

So, the first thing: do not use the hard drive you want to recover data from as the disk you use to boot the computer.

If you want to get data back, use a different HDD to boot, and plug in the drive you want to recover data from.  It's also useful to have a second drive to put the recovered data onto.

Go to Google and search "data recovery software" to find something that will work on the computer you're using.

Run the program, target the drive you want to recover from, and store the retrieved data on the disk you've set aside to back it up onto.

 

Data Recovery & Collection: Mobile Devices

Have you got a bunch of important messages on your phone, and are you wondering how you can store this data for safe keeping?  Have you experienced an incident that has made you feel unsafe, and are you wondering how to make a record of it to report to your employer, school or police?

If you type 'incident report templates' into Google, you'll find a bunch of example documents that you can use to make something that suits your purposes.

However, one of the problems might be that if you're simply writing things out, perhaps the matter won't be taken seriously…  not what anyone wants.

For this reason, and many others, below is an outline of how to get data out of your phone.   We'll also cover the process in case you've 'accidentally' deleted important data on your phone already.  Whilst the method is not 100% successful, it's a process worth trying, just in case it makes your life easier.

We'll just focus on Android and iOS. Whilst there are a few other options out there, in the majority of cases it'll be one or the other.

Data collection off most "smart phones" is most often handled by some app connected to the phone, whether that be Facebook, Gmail, Twitter or various photo apps.  These systems all store the data within their apps, so it's a lot more complicated to retrieve anything that may have been deleted within those apps; indeed, where the data is stored on a 'cloud service', it's better to figure out how to download a copy.

However, things like SMS messages and call logs are a little different.  These are generally not stored as part of a cloud service and need to be retrieved from the phone.

PART 1: Let's start with a situation where the data you want has been deleted.

STEP 1.

Try not to use the phone, and do not download anything to it in an attempt to get that data back.

When a user tells the operating system managing the device to delete something, it's generally not deleted.  It's just 'marked' for deletion and is no longer available through the graphical user interface, making it 'deleted' as far as most people would know.  The space is then 'freed up', which means the operating system knows that the area of the storage device previously used to store that data can now be overwritten with something else.

Whilst the process of writing to the storage device does not necessarily overwrite that specific part, it's not really very controllable.  Sometimes data can remain for years; in other cases, it can be overwritten very, very quickly.

'Data recovery' applications that ask the user to download something to the same disk aren't the types of tools you want to use.

STEP 2.

Find an application that works on a laptop or desktop computer. A simple way to do this is to type 'iPhone data recovery' or 'Android data recovery' into Google.

Features you may want to look for;

– What types of data the application supports retrieving.

– What formats the application outputs the records in.

The benefit of obtaining data in a format such as CSV is that it can then be more easily consumed by analytics tools, to take a better look into what's been going on, or to present the findings to others seeking evidence.
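As a hypothetical illustration of what that analysis might look like, the sketch below summarises a recovered call-log CSV using only the Python standard library. The column names `number` and `duration` are assumptions for the example; a real recovery tool will use its own column names.

```python
import csv
from collections import Counter

def summarise_call_log(path):
    """Count calls and total talk time per number in a call-log CSV.

    Assumes (hypothetically) columns named 'number' and 'duration'
    (seconds); adjust the field names to match your tool's output.
    """
    counts = Counter()
    total_seconds = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            counts[row["number"]] += 1
            total_seconds[row["number"]] += int(row["duration"])
    return counts, total_seconds
```

A quick tally like this makes it easy to see, for example, which numbers called most often and for how long in total.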

STEP 3.

Plug your mobile device into your computer & download the data.

STEP 4.

Make a copy of the data for back-up purposes, and do what you want with the working copy of the recovered data.

PART 2: Data that is still on the phone, and you don’t need to worry about any deleted records.

So, if the data is already on the phone and the whole ‘recovery’ process is unnecessary, then you’ll find a bunch of apps online that will work with your phone, on your phone, to collect and upload your data to a nominated location.

Importantly, if you need to make a point about something, an issue you might want to consider is that the 'metadata' stored in the files is more easily manipulated once you take that data off the phone. Whilst data records like call logs remain on the phone, it's far, far more difficult to manipulate them. Therefore, in terms of 'evidence collection', you might find taking a 'screenshot' of the data on the phone to be an important part of your data-collection process.
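To see just how easily file metadata changes once data is off the phone, here is a small sketch using only the Python standard library: it back-dates a file's modification timestamp in one call, which is exactly the kind of record an on-phone screenshot helps corroborate.

```python
import os
import tempfile
import time

# Create a file, then rewrite its timestamps to a year earlier.
path = os.path.join(tempfile.mkdtemp(), "evidence.txt")
with open(path, "w") as f:
    f.write("call record export")

original_mtime = os.path.getmtime(path)
one_year_ago = time.time() - 365 * 24 * 3600

# os.utime takes (access_time, modification_time) in seconds;
# a single call is all it takes to 'age' the file.
os.utime(path, (one_year_ago, one_year_ago))

assert os.path.getmtime(path) < original_mtime
```

The point of the sketch is simply that timestamps on exported files prove very little on their own.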

Similar to the above examples, search Google for 'screenshot android' or 'screenshot iOS' and the method to do so can easily be found.

PART 3: I've got voicemail messages, and the provider won't give them to me.

The method I've found to obtain a copy has been to use an audio recorder app: put the phone on speakerphone and, whilst the audio recorder is running, call the voicemail service and record the messages, including the information about when they were created, etc.

Once you have obtained these messages, use an audio editing application on a desktop or laptop computer, and be sure to add the information about when each recording was made, etc.

Concluding remarks.

Once you have the data you need, you might find it helpful to log the records chronologically, and to look at any metadata available to you, to further illustrate a clearer picture to those who need to know. Obviously, undertaking these sorts of tasks on innocent, unsuspecting third parties without their knowledge is most likely illegal, and moreover a gross breach of privacy and indeed trust. In some cases, someone may need help to do these sorts of tasks; in which case, it's recommended that any would-be 'good samaritan' does so on the data owner's own equipment, to ensure no stray copies end up floating about unnecessarily.
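Logging the records chronologically can be as simple as merging everything into one timestamped list and sorting it. A minimal sketch follows; the record fields and example entries are hypothetical, and timestamps are assumed to be in ISO 8601 form.

```python
from datetime import datetime

def build_timeline(records):
    """Sort mixed records (calls, SMS, voicemail notes) by timestamp.

    Each record is a (timestamp_string, source, description) tuple;
    timestamps are assumed to be ISO 8601, e.g. '2017-03-01T09:30:00'.
    """
    return sorted(records, key=lambda r: datetime.fromisoformat(r[0]))

# Hypothetical example entries merged from different exports:
timeline = build_timeline([
    ("2017-03-02T08:00:00", "sms", "Reply received"),
    ("2017-03-01T09:30:00", "call", "Missed call, 0 sec"),
    ("2017-03-01T18:45:00", "voicemail", "Message left"),
])
```

A single sorted timeline across sources is usually far easier for a third party to follow than separate per-app exports.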

Choice of Law

Choice of law is the mechanism used to govern the use of websites by way of international law. Each area (i.e. territory, state and country) has different laws, and as the web developed databases to store people's data, it was deemed too difficult to support the interpretation of law as it applies to each individual.

Each website therefore uses ‘choice of law‘ as a means to govern the use of their products (and your data).


The embedded map shows ‘choice of law‘ as it applies to some of the more popular products and services provided on the Web.

The examples provided above (as mapped in 2016) show where the 'governing law' is applied for the software-based services provided world-wide. The implications are innumerable. Put simply, the interpretation of many local laws, including telecommunications and intellectual property, is governed by default by the interpretation of laws in the territory for which 'choice of law' is claimed by way of the 'terms of service' or agreement made when electing to use the website.

Whilst governments and enterprises may enter into agreements to vary the terms on which they use the products and services provided by these organisations, individual citizens of sovereign countries around the world are not reasonably able to do so. Where disputes come about, the user is expected to seek legal intervention by way of a court in the territory nominated by the 'choice of law' contract; and indeed, if the user does not agree to this, they should not 'click the button', which in turn means they should not use the website. This becomes particularly difficult pragmatically when considering the implications of not using some of these sites and/or software services (e.g. mobile phone operating systems).

Furthermore; governments do not hold the same expectations of legal responsibility over foreign nationals (‘legal aliens’) as they do for residential citizens.  This is a complex area of Web Science that has largely been left without broad community engagement, discussion and consideration.

Traditionally, before the widespread use of the internet, software products maintained their own 'choice of law' in software licenses to protect those products from wrongful exploitation and/or misuse. These principles are still, many would consider, quite reasonable. Yet they have in turn been applied to the accumulation of data that was previously stored by individuals (for example, on floppy disks) and is now stored by the website to which a license is granted as part of the usage agreement and its terms.

As internet-related technologies continue to develop modern, dynamic, 'artificial intelligence'-empowered services, one might consider that a legal framework initially designed to protect the creative work of the software vendor may not be the most appropriate, asymmetrical framework to apply to the 'knowledge economy' powered by humans in conjunction with these globally integrated products and services.

Meanwhile, regardless of how a 'consumer's' data is stored or considered by way of law interoperable with participating entities, citizens are still expected to maintain adherence to their own choice of law, as a 'natural legal entity' / consumer.

DISCLAIMER:

This article may contain errors. For specific legal advice, interested parties are advised to consult a legal professional.

The WayBack Machine

The WayBack Machine is an archive of more than 299 billion webpages, from the very first versions of Google or Wikipedia through to collections of the 'pioneers'. It provides the means to look up most websites historically, and to retrieve content that may have otherwise changed or is no longer available on the web.

A plugin for Chrome is also available which helps to easily find the latest archived copies of webpages that may have been removed or taken down.
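For scripted lookups, the Internet Archive also exposes an availability API at `archive.org/wayback/available`. The sketch below only builds the query URL rather than fetching it, so it runs offline; fetching the resulting URL returns JSON describing the closest archived snapshot.

```python
from urllib.parse import urlencode

API = "https://archive.org/wayback/available"

def availability_url(page_url, timestamp=None):
    """Build a query URL for the Wayback Machine availability API.

    'timestamp' (YYYYMMDD) asks for the snapshot closest to that
    date; without it, the API reports the most recent snapshot.
    """
    params = {"url": page_url}
    if timestamp:
        params["timestamp"] = timestamp
    return API + "?" + urlencode(params)

# e.g. availability_url("example.com", "20160101")
```

Feeding the URL to any HTTP client (or just a browser) returns the snapshot details as JSON.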

Introduction to Maltego

What is Maltego?


Maltego is a dual-licensed tool, available commercially or freely, used to investigate relationships using data, then map, store and print reports from those investigative views, making the findings far more difficult for others to ignore.

I first stumbled across it when reviewing the information provided by Facebook to professional users, such as app providers, and the means by which that data subsequently allows them to facilitate advanced behavioural analysis as an international commercial entity.

(it’s generally rather difficult for a person to participate in society without social media accounts; the above video gives some insight into what that costs).

The commercial version of Maltego offers features and plugins that are not available in the community edition. One example is the Social Links framework, which provides more advanced social-network analysis than is otherwise available 'out of the box' in the Maltego community edition. Whilst SocialLinks is only one example, their videos can be found here.

 

What is Open Source Intelligence?

“Open Source Intelligence (OSINT) is a term used to refer to the data collected from publicly available sources to be used in an intelligence context. In the intelligence community, the term “open” refers to overt, publicly available sources (as opposed to covert or clandestine sources).”
In simple terms, Open Source Intelligence is the use of publicly available data sources, and the data sources made available to you (legitimately), to communicate an issue, using data that you're inspired enough to undertake the exhaustive task of collecting, collating and representing, in a manner that may be taken more seriously by law enforcement officials, lawyers, medical clinicians or other parties you provide that information to.
Indeed, so long as it's public information, or information you have the rightful use of, and that information is true and correct and handled with appropriate consideration, it is unlikely anyone will be able to stop or punish you for publishing it on a website, marking it up with the appropriate tools, and creating an output that's likely to be a top search result for those mentioned in it.

IMHO it's important not to make your problems the problems of children or others. Where others fail to do their job, the means made available by OSINT techniques can clarify circumstances in ways that may be reviewed by others so as to resolve problems. It's important not to become a problem whilst using these techniques.

Does Anonymity exist?

The easiest way to answer this question is: in effect, no. In 99% of cases where it is said that the information doesn't exist, the real problem is who has access to that information and the cost of obtaining it, rather than an honest circumstance where the data genuinely does not exist to substantiate a claim.

Whilst it took many years to convince leading, high-value organisations that the internet was a useful and worthwhile investment, their investments now, as in the past, instigate controls over the internet that make it very difficult to genuinely do anything without leaving traces of those actions on the internet somewhere.

The bigger problem is that this information is not available to the majority of victims who have been harmed by the unlawful behaviours of others; and in many circumstances, it's illegal to collect that information for the purposes of participating fully in the rule of law, as a subset of the guiding principles that operate our society.

These problems are therefore not technical in nature, but rather socio-political. If public servants were found to be doing the wrong thing, that would cost the government, were citizens easily able to provide that information to a court of law in a manner required by that court to effectively evaluate a circumstance. If powerful married men with families want to engage in sex with those who are not their wives, and their wives at times do the same, then whilst the 'data' may exist, it's not available, regardless of the subsequent harm an acrimonious relationship may cause children.

Most organisations use sophisticated computing systems to manage their accounting, stock-management and related business records; yet we are still provided thermally printed paper receipts that fade in sunlight.

Mobile phones continuously track the whereabouts (and speed of travel) of the device, but this information is not made available for the purpose of dealing with traffic infringements. New vehicles can tell whether someone is wearing a seatbelt, but a special device is needed to get that information.

Our web usage is continuously tracked; the websites we use can figure out when we sleep (due to lack of activity on mobile devices, among other things). Whilst these things all form part of what is used for crimes that cause significant financial loss to government entities, the data is more often than not suggested 'not to exist', and save for particularly 'special circumstances', is not made available to a citizen seeking lawful remedy.

Whilst it is true that some, particularly skilled, dedicated and well-financed individuals can form circumstances in which their actions are made ‘anonymous’ or unable to be identified; this is simply not reality for the vast majority living in our modern ‘connected’ age.  

So, whether living in a democracy or otherwise, when we seek 'lawful remedy' the question becomes how exactly we go about achieving it, when we may be discouraged by others from doing so.

An introduction to Virtual Machines.

A virtual machine (VM) is an emulation of a computer system through the use of specialised software on a host computing system. VMs are used throughout the internet for hosting systems, websites and other resources for an array of purposes, including the means to scale a solution from limited hardware resources as a small site, through to managing the hardware requirements for that solution as it grows. Other uses of VMs include developers who want to test and/or develop websites, and technology professionals who need to test particular forms of software, or figure out and manage security risks such as malware, among an array of other purposes that make VMs very, very popular.

On a less sophisticated basis, VMs offer the means to run any type of operating system as an application on most computers or laptops. It doesn't matter if you have a Mac or a PC; you can run whatever operating system you want in a VM, load it when you want to use it, and turn it off whenever you like, without leaving problems on your host machine. Because the virtual machine is an independent environment, from the operating system right through to any and all applications that run within it, whatever you do in that environment is stored within the virtual machine rather than in your normal computing environment. It's also possible to put a VM on a USB key and load it on other machines, or share the work you've done in the VM by simply copying it onto a USB key and giving it to someone (with the relevant details) for them to review or store safely for you.

A commonly used application for creating Virtual Machines is VirtualBox.  

Virtual machines can be used to create a clean computing environment for a specific purpose, keeping that activity out of your everyday computing environment. In this way, virtual machines are an effective means to deal with other web-persistence issues, ideally alongside the use of a VPN.

Web-Persistence

I’m not quite sure what to title this section.  Many speak of this concept as digital identity persistence, yet often it’s not the person that is subject to ‘web persistence’ but rather the machine or home network address that provides persistent information about the characteristics of a user; regardless of who the actual user is.  

This can end up in an array of unfortunate situations. A father, mother or other adult in a household who enjoys adult material may unwittingly alter the website advertising being served to others who use the same internet connection (children included!). Families who share machines, and the accounts set up on those machines, may create web experiences that cross-pollinate in different ways irrespective of the user at the time.

These issues pertain to what I'll call 'web persistence': the circumstance in which the use of the internet is tracked by operators who work with whatever information they can get, altering the internet experience on that machine, in that location, from whatever account the user happens to be using, in an effort to make money through your use of the internet. They do this through identifiers, and through 'scraps' of information left over from previous uses associated with those identifiers. The systems that collect this information are not simply the website you intend to visit, but also the services that website uses to provide the functionality it delivers. When thinking about this from a security point of view, the term used is 'vectors': 'attack vectors', 'security vectors' or other forms of vectors that can be used to trace, track and identify.

An easy way to understand the different ways this may occur is by considering the OSI Model.

Each Machine has a MAC ADDRESS, which in-turn connects to a network and is provided an IP ADDRESS.  From there, your machine forms a fingerprint.

Parts of this digital fingerprint include your user account, the public IP address used by your network to receive webpages, and the information stored by your browser, such as cookies or saved login information, which websites may use to infer that 'you' were using the internet in a particular way, regardless of whether it was you at the keyboard or someone else.
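A few of these identifiers can be read from the host itself with the Python standard library. The sketch below is illustrative only: what websites actually collect goes well beyond this (via browser APIs, cookies and third-party services), and the IP shown here is the local one, not the public address websites see.

```python
import getpass
import socket
import uuid

def local_fingerprint():
    """Collect a few locally visible identifiers.

    uuid.getnode() returns the hardware (MAC) address as a 48-bit
    integer, or a random value if no MAC can be read.
    """
    mac = uuid.getnode()
    return {
        "user": getpass.getuser(),
        "hostname": socket.gethostname(),
        # Format the 48-bit integer as the familiar aa:bb:cc:dd:ee:ff.
        "mac": ":".join(f"{(mac >> s) & 0xff:02x}" for s in range(40, -8, -8)),
    }
```

Running `local_fingerprint()` shows how much identifying material is available before a browser ever adds cookies or login state on top.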