On Borrowed Time

I have just finished reading my brother's report:
'On Borrowed Time
Avoiding fiscal catastrophe by transforming the state’s intergenerational responsibilities'

It is available here:-
Adam Smith Institute, author Miles Saltiel: On Borrowed Time 8/12/10 PDF document.
Link updated to own storage 15/05/2017.
It will never catch on. The fundamental objection to the argument for a return to laissez-faire is that it would expose people to the harshness of an unreconstructed market, such as we already see with the immigration of cheap labour, where people are prepared (are forced and exploited into) living several to a room.
However, I find myself very sympathetic to the thrust of his arguments and the vision he articulates.
I suppose I believe that the very faulty system we have, incredibly wasteful of human potential as it is, amounts to a social compact against the worse human behaviour that laissez-faire risks.
In short, I don't think we are capable of it, despite its appeal.
Read his report and come to your own conclusions.

The Nokia 1100 is selling for over $30,000 - to hackers


Hmm. But foaf+ssl would work after any bank authentication procedure. I don't think the way banks abuse TANs would be relevant to this. This is interesting because it tends to show the potential security of the approach.
Needs investigating further though, and steps fleshed out for different scenarios.
Naomi Klein Oil Spill - Guardian 19/06/2010 http://ping.fm/PtYj5

From the top




I realise that I am never going to fit into my work environment.
I don't think that I can blog about this impartially. But, perhaps, usefully?
Before the work specifics, let's set the scene.
We have a government that wants to grow and promote the intellectual capital of this country.
How are we doing?
The answer is probably surprisingly well despite, not because of, the government.
This applies particularly to my field of IT.
Translating requirements into an application is one thing, but translating efficiently, creatively and offering true value for money is something entirely different.
It is quite clear that government has no criteria for the latter and, therefore, no depth of understanding of the subtle consequences of decisions that ensue from contractual arrangements downwards.
In their many IT endeavours, the government is largely ensnared by large IT companies who make it their business to justify the highest possible costs over the short and medium term.
Because those companies cannot show net profits above around 5% in the public sector, they find other ways to make money: the most common pattern is that of over-complicating requirements gathering and elongating the time it takes to fulfil an item of work. Attendant patterns are non-cooperation with IT partners and back-loading costs to non-IT ancillary services.
All of this comes about because the Civil Service is inattentive to contractual detail. They are undermanned and under-skilled in their oversight roles. They also consistently choose large suppliers rather than a series of small suppliers, which means that they delegate structural organisation in a way that discourages competitive innovation.
Let's look at the consequences of this.
The government could be promoting exemplary projects, and it would be important if they did so.
Three truths:
IT is a fluid field with much to learn and new ways of doing things at every level of a project. Experimentation coupled with uncertainty is the norm. This is so much the case that it cannot be said that further experimentation necessarily increases risk (within some parameters). Experimentation, trying things where results are uncertain, can reduce risk.
This truth is fundamental to understanding good IT governance.
The second truth is that top-down governance is intrinsically flawed. The larger the pyramid the larger the mass of detail that is essentially unknown and, hence, contains hidden risk.
The third truth is that large pyramids are intrinsically unstable and dangerous when one needs to interact with another, at whatever level up or down the pyramid.



The long, flat base


Notice how pyramids, as such, are functionally useless and are not part of the modern construction repertoire; by this I do not mean pyramid-shaped buildings, but actual pyramids. But emblematic they are, and rightfully so. Engineering is more about bridges, road and rail lines of communication, and various sorts of buildings, depending on function.
Pyramids, though, are emblematic of human structures, not without reason. The repetitious work is greatest at the base, hardest higher up. Slavery is equal at any point. How many people had to be ground to dust to prevent the remaining one or two hidden in their midst from also being ground to dust? Large companies are similar, unless they are very skilful in their people management.
This is the problem:-
The broader the base the more pointless and tedious each component task, because each task at the base withstands a huge amount of pressure from above.
This is not intelligent design.
The pyramid, itself, is not an intelligent structure. But even if it were a bridge there are trade-offs between one massive bridge and several small ones. For instance one massive bridge would never have worked as a solution for bridging the Thames in London.
But this is what government does with IT, completely unnecessarily given IT's flexible and scalable nature.
These solutions have three disastrous consequences.
They cannot be efficient solutions.
They cannot be optimum solutions because they squeeze creativity out of those at the bottom who must implement them.
They cannot be economically competitive because they distort the market and deprive smaller companies of opportunity. What competition there is comes at the expense of the first two points, and this is ultimately disastrous for any policy meant to promote intellectual capital in this country.


Project National Health - policy driven IT

How can an IT project grow from £2.3 billion to over £12 billion with no one questioning the manner in which the project is set up? I find it very odd that any country can engage in such a huge project in such a wasteful way. We know that government wants to mitigate risk by risk transfer, and that can only work if the company is sufficiently solid to bear the putative risk. At first sight this seems obvious and good business. However, look at the figures and imagine the cost of this risk mitigation. No sensible project would be underwritten to that degree. The reality is that large contractors revisit the risk on the government, just as the banks have. And the reason for this is poor, unimaginative management.
The NHS IT project has been an absolute disaster for the taxpayer and for the IT industry in this country: for the former in terms of value for money, and for the latter in terms of boosting innovation and competitiveness. What I find very difficult to understand is how a government can expound the virtues of intellectual capital and competitiveness on the one hand and act in such a crass and destructive manner on the other.
The reality, as I said, is that large contractors revisit the risk on the government, just as the banks have.
The dynamic is not quite the same: it is an absolute that the banks should not fail, but, in that this project is a flagship of government policy and the government does not want to be seen to fail, this project cannot fail either. Except that, by any sensible measure, it has already failed.
Feeding the insatiable appetite of large IT companies has decimated innovation, which can only thrive in the competitive environment fostered by healthy small firms.
To understand all of this, fundamental principles have to be revisited.
Definitions of intellectual labour need to be understood, especially in the context of IT engineering, something that government simply does not get at all.
I shall lay some trails.
One of the strong motives for Open Source software is subversive, while the other is the wish not to have to repeat behind closed doors what need be done only once if the doors are open.

SemanticC: Open Contract http://ping.fm/xnWjM

Open Contract

Published on Friday, June 11th 2010. Edited by Rat Outzipape.

Cameron's Strategy of Open Government
I completely agree with David Cameron's strategy of opening up data including salaries in the Civil Service.
There is something about this that the Prime Minister cannot say, so I will say it on his behalf.
This policy is designed to tackle any hint of corruption.
This means there have been many hints of corruption and malpractice in dealings between the Civil Service and its suppliers. This also means that a Civil Service that once had an unequalled international reputation for propriety can no longer be fully trusted. That shoddy and sometimes shady business practice has gained the currency of acceptability in the corridors of power.
No moment of reflection is required to see how appalling this really is.
The Weapon of Open Data
Open data could prove the most powerful weapon ever unleashed in this arena because it naturally leads to requests for open contracts and open scrutiny of existing contracts, many of which may now be void due to breach.
The Integrity of the Incumbents
Beware of those to whom these contracts are let as they fight to the last for their revenue streams.
They will also fight to refute any impugning of their integrity.
On this last they are the most vulnerable.
They are vulnerable because, of all accusations, people do not like to be told that they are doing something wrong in a moral or reputational sense.
The Example of the Gulf Oil Spill
The BP Gulf of Mexico oil spill is an example of resistance to the impugning of a reputation made personal.
There is too much detail to describe the whole range of these distressing circumstances, but just looking at the pressure President Obama is bringing to bear on BP, and the unfortunate comments of its Chief Executive, Tony Hayward, is illustration enough.
The point is the way that responsibility is being redefined in the context of an environmental catastrophe.
Quite correctly, Obama is not letting responsibility be defined by business and legal rules, that is, by who owned the rig, for instance.
For the White House it is all BP's responsibility.
The Parallel with Government Supplier Contractual Relationships
In the dealings between government and its suppliers things are slightly more nuanced.
However the parallel that can be drawn is the environmental catastrophe as a metaphor for the excessive amount being spent, and therefore wasted, on IT with all of the very dire consequences of this (a sort of pollution in itself).
And, while there is no single entity like BP who is responsible, there are the collective arrangements between government and its suppliers, where the responsible parties have not been able to take control of the forming and execution of the contracts they have entered into.
That is the moment of 'corruption': the moment that an undertaking is given by both sides that both know will not be adhered to, and that will always lead to a quiet life for the client and a 'get out of jail' (no really onerous penalties) for the contractor.
Benefits to the Government Contractual Party
While it is straightforward to see what benefit might be in this arrangement for the contractor, it is more difficult to see the benefits for government.
Is my notion of corruption in this context too attenuated?
First of all I should say that I have come across sweeteners for the pill. The promise of board appointments on retirement, for instance.
I have also come across figure mangling, costs being lost or hidden and KPIs being redefined from their original contractual intent under the knowing eye of the most senior Civil Servant responsible for the contract.
But I don't think any of this, common though it may be, is really the heart of what I would call corruption.
It may seem like these are the issues that must be pointed out and sorted out, but they are a distraction from the real issue, which lies in the reluctance or inability of the customer to keep value for money in view.
The customer has immediately been compromised by the nature of the way in which contracts are written and awarded.
The Illusion of Tax Payer Benefit
My idea of corruption, and of what the benefit is for government, is this: government enters into the illusion that, while they are spending several times what a project should cost, extracted from them by dint of supplier lock-in, somehow this extra spend, so elaborately justified by the supplier, is value for money.
The corruption is the play of forces between very poorly written contracts that become impossible to fulfil in their specifics to the customer's satisfaction, the monopoly to whom the contracts are offered, and the exaggerated and often spurious benefits offered by suppliers, which government accepts as a substitute for full contractual satisfaction.
The Open Contract Business Proposition
With what I am calling open contracts, all of this sort of malpractice will become easy to detect, which is the first step to rooting it out.
Much has been said about open source and open source in relation to government supply.
I no longer think that open source is the issue.
It is a weak business model to propose to take work already done and repackage it on the basis that only the repackaging will have to be paid for, which is often what the open source proposition sounds like.
The strong business model is that there are optimum sized teams (in numbers and by value) to deal with the complex operation of 'repackaging' i.e. integration, which entails intimate knowledge of the possibilities of the proposed solution and the customer requirements.
The entire value is in how customer requirements are translated into a solution.
The Increased Market Share To SMEs
To understand the real value proposition we have to step back and look at the larger picture briefly. Total IT spending is £14 billion/year.
As I don't know I shall invent some further figures.
Half of this is software related and half of that could be fulfilled for a tenth of the cost (I believe this last figure).
This means available savings in software procurement run at about £3 billion/year.
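A quick check of that back-of-envelope arithmetic (the figures are invented, as I said), sketched in Python:

```python
# Back-of-envelope check of the invented figures above (all in £bn/year).
total_it_spend = 14.0                # total IT spend
software_spend = total_it_spend / 2  # half is software related: £7bn
cheap_half = software_spend / 2      # the half that could be done cheaply
achievable_cost = cheap_half / 10    # ... fulfilled for a tenth of the cost
savings = cheap_half - achievable_cost
print(f"available savings: about £{savings:.2f}bn/year")  # £3.15bn, i.e. about £3bn
```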
Vince Cable has also said that 25% of all ICT procurement should be through SMBs; I take this to mean by value, although the statement I read was not clear. This is a tricky but important detail, as SMBs should be bidding in at much lower cost for equivalent function points, and bidding for much smaller contracts.
This should mean, if played fair, that even where the government does the most obvious thing to cut ICT costs by simply cutting budgets (10%-20%?), the volume of work going to SMBs is set to rise radically.
The first responsibility of SMBs here, then, is to ensure this is played fair.
The Attitude of Centralised Control
From the above we can see that the very large IT project spend has an appeal resting in cultural attitudes to cost and control, including a belief in its effectiveness in providing value for money by various measures: the supplier meeting KPIs and the customer keeping management costs down.
This positive perception is compounded by a reluctance to have these perceptions challenged by a decentralised model.
The decentralised model fundamentally challenges the ethics of centralised control, and that rankles above all with administrators schooled and experienced in the centralised model so actively promoted by the last administration and the Tory administration preceding it.
It is far better to have this issue out in the open and well acknowledged than to avoid it.
The Decentralised Model of Control
The decentralised model relies on forces inherent in the overall business process to ensure cost, quality and control as much as the specific monitoring of many contracts.
These are the forces of the market.
There is another way of explaining this.
It is a misunderstanding that large commissioning, specifically in the field of software development, is any less time consuming than managing several contracts with different suppliers.
It is just that here techniques would have to be adjusted. This is a difficult area, as the natural response, and one that government is trying in some areas, is to introduce a technical provisioning layer in the form of another contractor. Unfortunately this will lead to the same problems that already exist, where that contractor then becomes the large supplier.
Five Broad Principles of Integration
There are five broad principles that can be extracted from the integration of small projects that are entirely lost in large projects.
1. The First Law of Modularisation.
Each identifiable module must be built in such a way that it can be utilised painlessly by another module. This should be established as a practice in the initial contract. If any dimension of the contract is anticipated to burst through a predefined ceiling, it should be subject to competitive rebid.
2. Customer requirements can be throttled back.
The most powerful incentive to this is the low bid placed by the supplier. By mutual understanding every contract should have an element of work on requirements thinning.
3. Software is never complete and there is a long tail which is first improvement then moving into maintenance.
Each of these issues must be dealt with very strictly. The First Law must be applied. Moreover, there is a relationship between these three phases and the rate at which money is spent in any one period. Wise provision will curtail front loading, thus increasing quality and expected life while decreasing cost over the initial period.
4. Components must be interoperable.
Processes of interoperability can be highlighted, captured and themselves become modules comprised of components. There is no longer any mystery about this art. It just needs to be applied consistently. No more £100k price tags for changing a URL on a server! Or £10 million price tags on software that, evaluated on function points, should cost no more than £1 million, but is justified because it 'reuses or integrates' with an existing system!
Costs can also be spread more evenly, as in 3., an approach that is only viable with small interoperable modules.
5. Processes must reflect the reality of software development.
As with 4., it must be recognised that there are orthogonal processes entailed by, but not immediately apparent in, a problem domain. These are primarily the unavailability of the needed or the expected, whether this be a URL or data. Modern applications dealing with LOD in a RESTful manner can ease this situation (a minimal sketch follows below).
But it is crucial to understand the necessity of systematically identifying these processes.
It is this core knowledge that needs to be extracted from the few privileged suppliers for anything like a level playing field and fair chance of success.
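To make point 5. concrete, here is a minimal sketch of a RESTful fetch of Linked Open Data (LOD) that treats an unavailable URL or missing data as a normal, handled case rather than a failure. The URL and the fallback policy are hypothetical; this illustrates the principle, it is not a prescription.

```python
# Minimal sketch of point 5.: a RESTful LOD fetch that anticipates the
# unavailability of the needed or the expected, whether a URL or data.
import json
import urllib.error
import urllib.request

def fetch_resource(url: str, fallback: dict | None = None) -> dict:
    """Dereference a LOD URL; degrade gracefully if it is unavailable."""
    request = urllib.request.Request(url, headers={"Accept": "application/json"})
    try:
        with urllib.request.urlopen(request, timeout=5) as resp:
            return json.load(resp)
    except (urllib.error.URLError, TimeoutError, json.JSONDecodeError):
        # The expected resource is not there: an orthogonal process the
        # design must handle as routine, not an exceptional crash.
        return fallback if fallback is not None else {}

# Hypothetical usage: a contract record that may or may not be published yet.
record = fetch_resource("http://data.example.gov.uk/contracts/123",
                        fallback={"status": "unavailable"})
```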
The Open Contract
The Open Contract is a concept designed to facilitate the five points mentioned.
It extends to the divulgence of source code: what previously would have been covered by Crown Copyright will now be made available under open source terms.
This is one of the most astonishing and forward looking commitments made by David Cameron, and yet there has been very little comment about it.
It behoves SMEs interested in open source to take this point up vigorously.
To release code as open source also implies a whole management process to bring this about, dovetailing closely with the concept of the Open Contract.
Here it must be borne in mind that open source imposes a discipline on code repositories, if any benefit is to be derived from them, far greater than that typical of code management in government projects. From the above, taking point 5. in particular, pressure must be mounted on Whitehall to divulge what has previously been treated with commercial confidentiality: contracts with suppliers.
To achieve this some compromise may have to be reached.
There is no doubt that existing contracts hold a wealth of detail as well as making commitments that, on scrutiny, may not have been fulfilled.
It is the possibility of the latter that may provide the leverage necessary for the former to be revealed sufficiently for SMEs to begin to formulate their own strategies in this market.
And then, with the Open Contract process in place, it remains a level playing field for all concerned. We know the Open Contract will prevent such practices as sweeteners, whereby a price is bid low on behalf of a supplier to gain a foothold and, preferably, lock-in.
However, the issue remains whether competitive advantage would be lost to the putative supplier, making it unattractive for them to enter the bidding process at all.
These are open issues.
For instance on what basis would such bids be assessed?
It is clear that the technical skills of one supplier will not necessarily be equivalent to another's. Success will surely be a function of the aptness of the proposed solution combined with the likelihood of technical success within time and cost constraints.
How this is measured will prove crucial to the functioning of these innovations.
Finally
As a greater variety of technical solutions is encouraged, each with the absolute constraint of interoperability with other solutions in its context, expect a flowering of diversity and innovation.

Bentham's Panopticon and Orwell's Nineteen Eighty-Four

The theory of Panoptical control: Bentham's Panopticon and Orwell's Nineteen Eighty-Four. Harry Strub. 2006; Journal of the History of the Behavioral Sciences - Wiley InterScience http://ping.fm/NqPto

Diaspora Part Six - The Diaspora Vision

Adam Saltiel - SemanticC
Published on Tuesday, June 1st 2010. Edited by Rat Outzipape. Updated on Tuesday, June 1st 2010 at 7:56 PM.
Recent Facebook changes, Has Anything Changed?
I promised that I would review the recent changes to Facebook in the light of Diaspora as a Facebook alternative.
These are the most interesting articles that I have found about the changes to Facebook privacy settings.
Two are from the UK newspaper, The Guardian.
OK, I have my bias, but I did read other sources too.
Facebook: our hiccups on privacy
The first few comments are worth reading as well.
Facebook Privacy Settings Red Herring
This is a commercial blog post.
List Helpful Posts And Videos About Facebook Privacy Settings
10 Reasons Why Privacy Issues Won't Drive People Away from Facebook
The main points get repeated, so I won't repeat them here.
My conclusion from reading these articles is that the not-too-distant future is of more interest than the immediate effect of slight adjustments to the Facebook privacy-control GUI.
The Crunch Question
What seems to be a crunch question is whether people in large numbers want an alternative and what sort.
There is a difference between creating an Open Source project that has the attention of a few thousand potential users and creating a viable and sustained alternative social networking medium.


Finding an Alternative Answer
There is another perspective to be had on this.
Social Networking is a medium that is evolving out of the substrate of the Internet.
The ease of use, including choice over exposure, association and privacy is increasingly going to become integral to the substrate of this medium. It is being built in.
This means that the Internet is evolving. As it does so its users will find many different ways of dealing with these issues.
In this regard all the articles I have read have been short sighted.


The Diaspora Vision
I think that the Diaspora team has this vision.
It may be that there will be difficulties in finding how to pitch Diaspora, which parts work and which do not; difficulties in how users will adopt various aspects of what it offers.
But finally Diaspora is part of this evolving Internet. How people come to think of the Web and use it will be influenced by the possibilities that Diaspora makes available to users as it finds its own place in this evolution.


A Legislative Framework For The Decentralised World
Although the UK broadcasting model is informative, I doubt it will be adopted by the Internet; in fact I think broadcasting in the UK is going to be subject to less control in the near future. Anyway, that model of control depends to a large extent on centralised authority, the very thing that the new Internet, Diaspora, will begin to dispel.
What I do think is that the means of delivering a new user experience will improve and that this will mean that the need for large social networking sites will change. To survive, they will have to offer something else to keep users interested.
They, too, will have to correspond with the new Internet.

Adam Saltiel


June 2010

Resources
1: Facebook: our hiccups on privacy
2: Facebook Privacy Settings Red Herring
3: List Helpful Posts And Videos About Facebook Privacy Settings
4: 10 Reasons Why Privacy Issues Won't Drive People Away from Facebook


Well-cut lawn. It is a bit of a mess. Will have to attach a photo at some point. My thing was that I wanted to preserve the bluebells, also for the insects.
David Cameron wants Government contracts data to be made available. I wonder what is meant by this and how I could find out? This is the beginning of a huge change. But it needs to be managed, or it will either be chaos or be re-appropriated by big business. Will have to look into this!
BBC News - David Cameron to make more government data available http://ping.fm/Hoanu
Mr Cameron said he wanted to rip off the "cloak of secrecy" around government and public services - and extend transparency as far as possible.
Data being made available includes items of major government spending and the pay of top civil servants.
Mr Cameron outlined plans for freeing up the information in a letter to all government departments.
He said: "Greater transparency is at the heart of our shared commitment to enable the public to hold politicians and public bodies to account."
Details of large government contracts will be published from September, items of central government spending from November and local government spending over £500 from next January.
I am surprised by the lack of response to this initiative. It seems that, as yet, big money has a very tight stranglehold on this whole industry segment.
O'Reilly Radar
When I wrote last week about Facebook privacy flap, I was speaking out of the frustration that many technologists with a sense of perspective feel when we see uninformed media hysteria about the impact of new technology. (How many of you remember all the scare stories about the risks of using a credit card online from back in the mid-1990s, all of them ignoring the risks that consumers blithely took for granted in the offline world?)

SemanticC: Joindiaspora Post - Notification of Blog Entries

Join Diaspora

Joindiaspora Post - Notification of Blog Entries

About a week or so ago I promised that I would write up an exploration in the Diaspora project space and post it on my own blog.
This has now become five blog posts!
I have not written a final, sixth part, which will be a review of the changes introduced by Facebook just now, in the light of what I have learnt about the Social Networking space.
I publish on two blogs, one at http://conjoint.biz/recent and the other at http://semanticc.blogspot.com/.

http://conjoint.biz:1080/recent - the recent entries index - is hosted on my own machine and is a nanoki lua wiki.

If you are interested in any of that you can look up links on either site. However, there are lua implementations on mobile devices and the design is strictly RESTful - in fact it is an object lesson in good design and creative thinking by its creator and lua contributor Raphaël Szwarc. ('lua' is 'moon' in Portuguese; it is one of several jokes of the wiki.)

    Enough of this irrelevance though!

    Diaspora Part One


    In this part, after a general introduction, I introduce the problem from a business perspective and what may be the issues confronting social networking sites, for example Facebook. I introduce an hypothesis about the reasons for user dissatisfaction and try to illuminate the areas that this dissatisfaction covers. I examine nine points, and it does seem to me that much (7 out of 9 points) of what people may object to in Facebook is common marketing practice, the boundaries of which Facebook does seem to be pushing. I will cover this in more detail in the final section. But I am very interested in what others think about this.

    I haven't had time to follow up all the links for the final article. One thing I did notice is that Facebook describe what they offer to advertisers as the march of progress, and also point out something I had noticed before: that they sell the package to their customers, they do not allow their customers access to the data to trawl themselves. I do find this last point disingenuous as, quite obviously, they want to be the gatekeepers to that data; that is, after all, how they make their money.
    Diaspora Part Two
    Here I offer my version of what Diaspora may be in a nutshell and contrast it in three basic points with Facebook.
    I then broaden out to look at W3C, as a standards body, and some of their work in the areas of our concern.
    Diaspora Part Three
    In Part Three I look at issues of security, privacy and trust, defining the terms in a more technical way. I also look at attendant issues, such as reliability of service. I contrast a peer to peer solution with a highly centralised one such as Facebook drawing some conclusions about quality of service and the way user expectations may be met.
    Diaspora Part Four
    In this part I look deeper at the technical detail of two proposed solutions.
    As I do not know what Diaspora are intending, apart from brief comments in videos, I extrapolate from those comments as best I can and contrast them with another possible solution based on an identification and authorisation framework known as foaf+ssl, which the Social Web Architect Henry Story writes about in a series of blogs. I really favour the most ubiquitous and open solution, and that seems to me to be something based on foaf+ssl in the HTTP protocol (both for ubiquity and for foaf opening up the possibilities of the semantic web).

    Most importantly, I point out that either solution could be hosted on existing always-on devices, that is, home DSL units.

    This is a subject for further blog posts.
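As an aside for readers unfamiliar with foaf+ssl, the sketch below shows the verification step as I understand it: the client presents a TLS certificate naming a WebID URI, and the server dereferences that URI to check that the profile publishes the same public key. The library choices (cryptography, rdflib) are mine, and the exact cert-ontology terms have varied over the protocol's history, so treat this as illustrative only.

```python
# Illustrative sketch of foaf+ssl (WebID) verification, not a reference
# implementation. Assumes an RSA client certificate and the W3C cert
# ontology (http://www.w3.org/ns/auth/cert#) in the FOAF profile.
from cryptography import x509
from rdflib import Graph, Namespace, URIRef

CERT = Namespace("http://www.w3.org/ns/auth/cert#")

def verify_webid(pem_cert: bytes) -> bool:
    cert = x509.load_pem_x509_certificate(pem_cert)
    # 1. Read the claimed WebID URI(s) from the certificate's subjectAltName.
    san = cert.extensions.get_extension_for_class(x509.SubjectAlternativeName)
    webids = san.value.get_values_for_type(x509.UniformResourceIdentifier)
    pub = cert.public_key().public_numbers()  # RSA modulus n and exponent e

    for webid in webids:
        # 2. Dereference the WebID: fetch the FOAF profile it names.
        profile = Graph().parse(webid)
        # 3. Check the profile lists a key matching the presented certificate.
        for key in profile.subjects(CERT.identity, URIRef(webid)):
            modulus = profile.value(key, CERT.modulus)
            exponent = profile.value(key, CERT.exponent)
            if (modulus is not None and int(modulus, 16) == pub.n
                    and exponent is not None and int(exponent) == pub.e):
                return True  # client controls both the key and the profile
    return False
```

No passwords or central identity provider are involved, which is why the approach composes cleanly with whatever authentication a bank might layer on top.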
    Diaspora Part Five
    This final part offers a brief insight into some of the issues currently being discussed in the semantic web space. It isn't an evaluation of progress, nor a recommendation of subjects to tackle.

    In fact there is something unsatisfactory about this, as there is an air of research rather than application about it all. I believe that impression may be misleading.

    Again, I may take these issues up at some later date.


    Here are the same links again from conjoint.biz.

    conjoint.biz: Diaspora Part One
    conjoint.biz: Diaspora Part Two
    conjoint.biz: Diaspora Part Three
    conjoint.biz: Diaspora Part Four
    conjoint.biz: Diaspora Part Five

    Published on Friday, May 28th 2010. Edited by Rat Outzipape.

    Background
    I have divided the Diaspora problem domain into the areas below.
    This reflects my own research, interests and thoughts.
    Why am I interested in this area?
    There are several reasons for this interest.
    In broad brush strokes -
    1. I have long been interested in the Semantic Web.
    2. I have been working on commercial applications whose technology intersects strongly with some of the areas below.
    3. Open Source remains the best learning playground and there is always more to be learnt.
    4. Activitystreams (as they are called) provide good test data for work on other filtering etc. concepts, including algorithms and display techniques, that interest me.
    I am not trying to pre-empt the activity of the core Diaspora group or to second-guess them. I am more interested in seeing where my thoughts about the issues below intersect with how the various problems I outline are resolved.
    These problems include issues about protocols, which determine the scope and behaviour of the system under development, as well as development methodology, which determines the human behaviour of the group.
    However, in this post I leave off discussion of development methodology. This shouldn't be taken to mean I am harbouring some secret criticism, far from it.
    So far the Diaspora core have raised enough funds to take their efforts well into 2011. A very fortunate position to be in.
    They have also responded, as I expected, to the surge in interest very responsibly.
    They have set up a group of advisers, necessary due to the large spend they now have at their disposal and the intense interest in the project this necessarily generates.
    Social Networking Solutions
    Group:GNUSocial/ProjectComparison
    Needless to say, the area is not unknown; there is already a lot of work in it, but it hasn't been brought together into a viable offering quite like what the Diaspora team seem to have in mind.
    This discussion, while referring to Diaspora, is in most places equally applicable to GNU Social and some other projects except that Diaspora is entirely uncharted, allowing greater range to creative speculation. I do not assume that my coverage, or that of my sources, is absolutely comprehensive. I would like this overview to be rounded and in depth though.
    The Conflict of Commercialisation
    In reading my entries where issues of privacy and security are mentioned, it is helpful to bear in mind that there is a gap between user expectations and actual practice. It may be that in some cases that pertain to privacy and security there are tools available that the user does not use, while in other cases there may be no tools, internal mechanisms, standards or laws at all. I have not tried to be absolutely thorough in identifying each instance but present the general picture.
    Further background for a picture of what the Social Semantic Web could be can be found in Jeff Sayre's article A Flock of Twitters: Decentralized Semantic Microblogging; however, my emphasis here is not the same as Jeff's. Jeff Sayre gives a good description of what knowledge streams might be like in terms of the filtering activity any one individual imposes on data they consume and (re-)publish. He offers a conceptual framework for this which is interesting enough. I am much more concerned with what is possible now, without exploring all the detail needed for more advanced use cases.
    Extended Discussion of Business Case
    Why now? Who else is working in this area?
    Before we go any further this is the first question that has to be examined.
    Roughly, there are two approaches to entering a complex domain like this: either just move in, trying to create a presentable application (from what exists, using a lot of glue) as a best first-cut effort, or study the field and move as precisely as possible into an identified niche.
    Commercial Potential for Diaspora
    There is no doubt that commercial companies can miss opportunities, especially in such a vast, expanding area. Importantly, they can overlook some specific areas because they conflict with their main efforts.
    For any would-be entrant, pinpoint accuracy might be the best weapon in finding that niche.
    The following questions must be answered.
    1. Is there a commercial opportunity here?
    2. Why is this not being done or, actually, is it being done?
    It is not possible to give a fully comprehensive overview of the respective business models of the big players.
    The pertinent question is whether there is an alternative offering wanted by sufficient numbers that is precluded from being offered by those players by virtue of their business model.
    Here I take it as a given that the technological commitments of companies reflect their business model, and that alternative technical solutions only make a company vulnerable if they represent an alternative revenue stream posing some threat to the existing company.
    Is There an Identifiable Alternative Service Offering?
    What, actually, do people want? Short on research, and short on time to look at whatever research there may be in the public realm, I am going to posit an hypothesis:-
    A Simple Hypothesis Concerning Social Networking and Privacy Management
    The hypothesis is that while Social Networking offers its users something that they want, an ability to broadcast and be broadcast to, the means by which this is achieved mimics the feeling of having a non virtual social network. People find this confusing. This confusion manifests itself in various ways which I briefly explore.
    The network is called 'virtual', but there is something misleading in that term, as it is a network of real people. What is being examined is the way in which communication in these networks is mediated by the services providing the communication.
    I take the following two points as givens:
    1. Social networking media are used successfully to influence opinion and buying habits by advertisers using new techniques such as viral advertising and crowd sourcing, based on both mining of user behaviour data and targeting the release of associated information through various channels, including the social networking channel itself, aside from overt advertising. These techniques also include buying rank and position for products, that is in some way making them seem popular, favoured and successful. For example, in general terms this applies to the 'like' button of Facebook.
    2. Users of social networking services will project (or imbue) their network and its context with feelings that have to do with themselves and their social network. These feelings are capable of distorting their perceptions of the behaviour and role of their network of 'friends' and its context as a service of the provider.
    It remains to be investigated what it is that people actually object to about Facebook at the moment. (But see the last section, The Facebook Response, where I briefly look at their immediate response.)
    In terms of a set of feelings about FB, I wonder which ones dominate. Is it a feeling of powerlessness? If so, why would that be different from feelings about a TV channel or a telephone service provider?
    My hypothesis is that the feelings are different from those expressed about other services, due to a combination of points 1. and 2. above together with the far less certain and less well known regulatory and normative framework governing internet activity. (That is, the lack of high-profile regulation in the area.)
    I posit that objections fall into the following categories.
    1. Data mining.
    2. Privacy of the personal.
    3. Data security.
    4. Revocation of statements.
    5. Data assuredness.
    6. Degree of Broadcast Propagation.
    7. Asset mining.
    8. Information gaming.
    9. Information accuracy.

    1. Data mining.
      Where the user goes on a page is monitored, possibly down to the fine detail of how long the mouse hovers in a certain area, which buttons are selected and so on. All three aspects of the user journey are monitored:-
      When the user enters an area (the landing page), where they have come from (the start page) is monitored, and when they leave the landing page, where they go to (the target page) is also monitored. (A small sketch of such journey logging follows this list of points.)
    2. Privacy of the personal.
      As a logged-in user, there is very little information about the user that might not be associated with this data. What would not, or should not, be associated with it is the user's name and contact details, or, of course, log-in details.
      But certainly location, ethnicity, age, sex, sexual preference, marital or partner status, political opinion, economic status and many other details can be harvested (in one way or another, and the methods are growing in sophistication). This is not associated with a particular user; in fact it is more valuable associated with a group of people. There are different ways in which such a statistical group may be arrived at. Similar behaviour groups users together into a statistical category group*. However, with the use of FOAF technology there is nothing to stop such a category group being formed from other friends in the network, in other words from information about friends and friends of friends, but all the time anonymously.
      There is now an explosion in the data available to advertisers, the difficulty they now face is what types of data they require for the product or service they are offering. The line between targeted advertising and influencing group thinking has become very much slimmer as a result.
      Since many 'applications', that is the games etc. popular on these sites, have the same privileges as the user using them, they, too, would be able to access otherwise private information that may be introduced to them by the user. They may also take information from the page context in which they are being used.
      * I believe this is what is referred to by Facebook and others as a demographic. See my final post - part six - for an examination of recent moves and statements by Facebook as of 23rd May 2010.
    3. Data security.
      Security is a big word. It is quite clear from the above that data is not secure in that it is mined. Most contexts on a web site like Facebook are available for data harvesting. The way to demonstrate this is to search Google for subjects that have been discussed on Facebook. So the issue here is not just that the data is public (it has been indexed by Google, or anyone else who is interested) but whether the additional information mined, as described, is any more of a security threat to the individual, the subject of the data.
      It can be seen that security really is a concern about two aspects of social networking, one is the way anonymous data is used, the other is whether private data is actually secure (can not be seen by those who should not) and safe (has duration as expected by the user).
      If we accept that anything that would attach a particular user to usage patterns is intended to be kept private, is the system effective in this? The answer must be that it is as safe as the provider can make it. This aspect, at least, of web site management is legislated for and monitored. (The Data Protection Act in the UK.)
    4. Revocation of statements.
      Sometimes users may create content that they wish to revoke at some future point. I haven't seen this to be particularly difficult in social media I have used. But there is the caveat that some streams of data are public and contributions to those streams do not retain the rights of the originator (to edit or delete).
    5. Data assuredness.
      Users may want to know that what they have created by way of content will be there in the future. When it comes to Facebook, once the data is there it is difficult to move it elsewhere. It may not be possible to do so in a meaningful way since, in principle, some conversation threads will comprise items that others have rights over, such as the right to delete.
    6. Degree of Broadcast Propagation.
      Users may want a degree of control over how messages are broadcast, such as depth, extent and over time. Moreover users may not want publicised content to be republished or amplified.
      It is probable that Facebook does satisfy items 3. through 6. to some extent. (I am not going to become an 'expert' FB user to test this out.)
    7. Asset mining.
      Data gathered in the way outlined above is an asset, that is different aspects of the data matrix are given value and sold to advertisers. This would be in a process much the same as the way that a page of newsprint or a time slot of TV is sold to advertisers. There may be secondary markets as advertisers pre-book or attempt to monopolise certain types of data, by location, time frame or other combination of criteria. While this process is hidden from the user, is there any objection to it? How is it different to more traditional media, such as TV advertising and product placement?
      These questions are quite fundamental.
      For instance do users want an advert free service, or do they want one that conforms to, as yet, undefined rules?
      Moreover, just because I use a web site does this mean that how I use it is my data, owned by myself, rather than usage data owned by the service I am using?
      It would not be possible to run an effective service without some usage data, and a great deal that users expect from Facebook would not be available if data access for the purpose of gathering usage data were very restricted.
    This will apply to Diaspora too. They will need to know what works and what doesn't, how to tweak things. That is best done on the back of usage statistics. It is just that in the case of FB, along with usage statistics there is a super set of data that is gathered expressly for the benefit of advertisers (analytics), which is common practice in many web sites, including Government web sites and, possibly, University web sites.
    If there is an objection here it would be better to be clear what that objection is: for instance, would it be the extent of the analytics gathered, the way they are gathered, or the huge population over whom they are gathered?
    8. Information gaming.
    This activity is very similar to product placement. It should be pointed out that only recently have the rules on product placement in the UK been relaxed, and, at that, not to the extent of what is common in the US. Needless to say there are no rules governing this aspect of commercial behaviour on the Web.
    Information gaming is where some category of information has its value or status inflated by some process based on analytics. A typical example would be where favourite tunes are derived from user usage patterns and a music publisher is allowed to associate other works with this result by placing those works alongside the returned result. Clearly this strategy may be more effective in some contexts than in others. For example, it may be found that, because of the restricted space on a mobile device, more impulse buying of the side-by-side item is induced.
    I would categorise this as a common advertising ploy, and include the way Facebook applications are targeted at particular users as an edge case.
    9. Information accuracy.
    This is a very broad area.
    This includes anything from masquerading an identity to falsifying data.
    Examples would be where a person goes online pretending to be someone who doesn't exist, with an implied connection to someone who does (I have examples; is this wanted or unwanted behaviour?). More extreme would be masquerading as another person. This is identity theft and could do a lot of damage to the real person without gaining access to their account. (I have also heard of examples of this in the non-virtual world. Legal authorities were uncertain whether they had power to act and did nothing for many years. The consequences were very disturbing for the imitated individual.)
    As to falsifying data, I am uncertain what the protection against this might be apart from what happens now, which is that claims get exposed. Perhaps in the virtual world, where claims are very transient, this may be more of a problem, e.g. a music band that polls at number one or something?
    I believe there is ample opportunity for data falsification of this nature, that it does happen and is an increasing risk, but my evidence is slight and not at all associated with Facebook.
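Referring back to point 1., here is the promised sketch of what such user-journey logging might look like. The event fields and names are hypothetical, purely to make the three monitored aspects concrete.

```python
# Hypothetical sketch of the user-journey logging described in point 1.:
# every page view records where the user came from (start page), what they
# viewed (landing page) and where they went next (target page).
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class PageViewEvent:
    user_id: str       # pseudonymous session or user key
    start_page: str    # the referrer: where the user came from
    landing_page: str  # the page being viewed
    target_page: str   # filled in when the user clicks away

def log_event(event: PageViewEvent) -> None:
    # A real tracker would batch these into an analytics store; here we
    # just serialise each event with a timestamp.
    record = {"ts": time.time(), **asdict(event)}
    print(json.dumps(record))

log_event(PageViewEvent("u123", "/home", "/friends", "/photos/42"))
```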
    The Parallel With Broadcasting
    There is much that might be learnt from the evolution and regulation of broadcasting in the UK (both commercial and public). It is quite certain that public broadcasting, despite the acceptable climate it generates for its services, will have little influence over the future of the issues being discussed here.
    Broadcasting Myself
    All of the above has to be taken in the context of the rather immodest desire of people (including myself) to share and broadcast themselves.
    Some of these points are taken up in a mild way in this series of blog posts on the subject:-
    Why we share: a sideways look at privacy
    Here the author of http://confusedofcalcutta.com summarises and quotes another author, Danah Boyd:-
    • We must differentiate between personally identifiable information (PII) and personally embarrassing information (PEI).
    • We’re seeing an inversion of defaults when it comes to what’s public and what’s private… you have to choose to limit access rather than assuming that it won’t spread very far.
    • People regularly calculate both what they have to lose and what they have to gain when entering public situations.
    • People don’t always make material publicly accessible because they want the world to see it.
    • Just because something is publicly accessible does not mean that people want it to be publicized.
      Making something that is public more public is a violation of privacy.
    A further point made by Danah Boyd is that:- Fundamentally, privacy is about having control over how information flows.
    Conclusion
    Facebook have no monopoly over the means by which social networking may take place, nor over the desire of people to share. A huge user base is attractive to advertisers and may actually be stimulating to users, what with the buzz of the crowd and highly targeted advertising feeling like attention being paid to the individual. Powerful ingredients. It remains to be seen whether there is actually a great demand for an environment far more under user control than could possibly be offered by Facebook, because that degree of control would conflict with their revenue model. There is no reason why Diaspora should not offer advertising as well. The issues are slightly complex, but it could be that each individual could opt in or out at will. Ideally that should not be a cost to Diaspora: Diaspora is a far less expensive infrastructure than Facebook.
    I discuss these points in my further posts.
    Resources
    0: Group:GNUSocial/ProjectComparison
    1: "a-flock-of-twitters"
    2: confusedofcalcutta
    3: Danah_Boyd
    Adam Saltiel
    May 2010

    Published on Friday, May 28th 2010. Edited by Rat Outzipape.

    Comparison With The Wider Technical Community
    Does the Proposed Service Conflict with Existing Popular Services?
    The 'Diaspora' System in a nutshell
    The last point made by Danah Boyd, about control over information flow, is truistic. But a service that concentrates on addressing the issue of how to control information flow is certainly different to what we have at the moment.
    In the sense of design philosophy, it does conflict with what is offered at the moment. As such a system evolves (as a result of user feedback however generated) this will become increasingly apparent. I believe that the defaults of Diaspora will revert to what was the norm prior to intense targeted marketing.
    I assume the goal is of having user control over each piece of data or communication to expand, contract or remove access and to edit, version or delete as ownership and system constraints allow, whether flowing to the individual or flowing from the individual.
    There are three distinct points of departure from Facebook in the envisaged architecture of Diaspora.
    1. Facebook could not offer full security of the type possible with Diaspora; its architecture probably would not sustain this, or would do so only with difficulty.
    2. Another consequence of the Facebook architecture is that a huge amount of traffic goes through the same domain. This means that the autonomy of each Facebook profile does not exist except through the Facebook super-domain. This is both a network issue (I understand that it is technically 'unhealthy', but lack details for this assertion) and a contradiction of one of the basic principles of the design of the internet: that each item (page) has a transparent and reliable identifying address (URL). This second point is a bit technical. It pertains to the ease with which Facebook may exchange data with other applications (while also respecting defined privacy).
    This is not possible with Facebook, while I expect it will be intrinsic to Diaspora.
    3. The Diaspora architecture is intrinsically less expensive to maintain. Without the centralised architecture there is no need to create such massive revenue streams to maintain and show profit from infrastructure.
    The Wider Technical Community
    W3C Initiatives
    Casting our net for further guidance the W3C has several initiatives in the area we are interested in. Parts of most of their work intersect with our concerns.
    It should be noted that, to my knowledge, W3C work is not based on 'customer' surveys.
The W3C is well named: it is a fee-paying consortium. Its work is based on polling interested parties, usually those from academia and industry who can give individuals sufficient sponsorship to carry them through the writing and presentation of papers to conference, and to steer recommendations through the different stages to acceptance.
However, for our purposes, the consortium structure works in reverse: we can use what surfaces in the W3C as a measure of the concerns of different types of internet user.
It is also important to note that the W3C has a huge reputation but does not have any legal powers to impose recommendations or standards. The W3C makes recommendations on the basis of consensual committee agreements (however achieved; there may be a voting system for those with a registered interest). Sometimes those recommendations languish, or are ignored by the wider technical community. (This has happened often, providing some noteworthy historical cases.)
One point to be made here is that, to my knowledge, the UK Government, one of the largest users of IT in the UK and one whose work actually intersects with much of the W3C's, has not introduced a programme of evaluation and adoption as a series of contractual obligations with its suppliers.
    In other words, W3C can be circumvented in the implementation domain on a grand scale. As I will show later, when discussing the lack of standards that apply to the information domain, the behaviour of government as an influential lead body does have an impact on us.
The W3C has done a lot of work in the areas of privacy, Intellectual Property (that is, concerning patents on solutions presented to the W3C, which are licensed on royalty-free terms imposed on members of working groups), DRM, policy management and others.
Here I am going to narrow down my exploration by concentrating on one authentication and trust framework called FOAF+SSL. This technique has been extensively researched by the Social Web Architect, Henry Story. I build out from this, mentioning how it contrasts with other proposals and measures.
    I explain this in the section Architectural Objectives below.
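Before moving on, a minimal sketch of the verification step at the heart of FOAF+SSL may be useful. The assumptions are mine, not from this document: an RSA client certificate, the Python rdflib and cryptography packages, and the cert: vocabulary (cert:key, cert:modulus, cert:exponent) used in later drafts of the specification; vocabulary details varied across drafts.

    from cryptography import x509
    from rdflib import Graph, Namespace, URIRef

    CERT = Namespace("http://www.w3.org/ns/auth/cert#")

    def webid_from_certificate(pem_bytes):
        """Extract the WebID URI and the RSA public numbers from a client certificate."""
        cert = x509.load_pem_x509_certificate(pem_bytes)
        san = cert.extensions.get_extension_for_class(x509.SubjectAlternativeName)
        webid = san.value.get_values_for_type(x509.UniformResourceIdentifier)[0]
        numbers = cert.public_key().public_numbers()
        return webid, numbers.n, numbers.e

    def verify_webid(webid, modulus, exponent):
        """Dereference the WebID profile and check that it lists the certificate's key."""
        graph = Graph()
        graph.parse(webid)  # fetch and parse the published profile document
        for key in graph.objects(URIRef(webid), CERT.key):
            mod = graph.value(key, CERT.modulus)   # xsd:hexBinary literal
            exp = graph.value(key, CERT.exponent)  # xsd:integer literal
            if mod is not None and exp is not None:
                if int(mod, 16) == modulus and int(exp) == exponent:
                    return True
        return False

The crucial design choice is that no certificate authority is required: the profile document published at the WebID is itself the authority for which keys speak for that identity.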
    The W3C Work on Privacy
P3P is the Platform for Privacy Preferences; do not confuse it with P2P, which is peer-to-peer networking, also mentioned in these posts.
What follows is quoted from these materials:-
The Platform for Privacy Preferences 1.0 (P3P1.0) Specification, 2002-04-16
Group Notes, 2006-11-13
P3P really has become the mechanism by which web sites inform their users of how they use, or intend to use, their data. It has come to be restricted to the policy on sharing user addresses with other parties and so forth, but its original intention was much broader in scope.
    From The Platform for Privacy Preferences 1.1 (P3P1.1) Specification :-
    1. Introduction
    The Platform for Privacy Preferences Project (P3P) enables Web sites to express their privacy practices in a standard format that can be retrieved automatically and interpreted easily by user agents. P3P user agents will allow users to be informed of site practices (in both machine- and human-readable formats) and to automate decision-making based on these practices when appropriate. Thus users need not read the privacy policies at every site they visit.
In Looking Back at P3P: Lessons for the Future, November 11, 2009, Ari Schwartz from The Center for Democracy and Technology says:-
    Although P3P provides a technical mechanism for ensuring that users can be informed about privacy policies before they release personal information, it does not provide a technical mechanism for making sure sites act according to their policies. Products implementing this specification MAY provide some assistance in that regard, but that is up to specific implementations and outside the scope of this specification. However, P3P is complementary to laws and self-regulatory programs that can provide enforcement mechanisms. In addition, P3P does not include mechanisms for transferring data or for securing personal data in transit or storage. P3P may be built into tools designed to facilitate data transfer. These tools should include appropriate security safeguards.
    The following shows part of the W3C specification definition and its modification by a later note:-
    1.1 The P3P 1.1 Specification
    The P3P1.1 specification defines the syntax and semantics of P3P privacy policies, and the mechanisms for associating policies with Web resources. P3P policies consist of statements made using the P3P vocabulary for expressing privacy practices. P3P policies also reference elements of the P3P base data schema -- a standard set of data elements that all P3P user agents should be aware of. The P3P specification includes a mechanism for defining new data elements and data sets, and a simple mechanism that allows for extensions to the P3P vocabulary.
    1.1.1 Goals and Capabilities of P3P 1.1
    P3P version 1.0 is a protocol designed to inform Web users about the data-collection practices of Web sites. It provides a way for a Web site to encode its data-collection and data-use practices in a machine-readable XML format known as a P3P policy. The P3P specification defines:
    * A standard schema for data a Web site may wish to collect, known as the "P3P base data schema" (5.5)
    * A standard set of uses, recipients, data categories, and other privacy disclosures
    * An XML format for expressing a privacy policy
    * A means of associating privacy policies with Web pages or sites, and cookies
    * A mechanism for transporting P3P policies over HTTP
    The goal of P3P is twofold. First, it allows Web sites to present their data-collection practices in a standardized, machine-readable, easy-to-locate manner. Second, it enables Web users to understand what data will be collected by sites they visit, how that data will be used, and what data/uses they may "opt-out" of or "opt-in" to.
    From P3P Specification Note
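As a concrete illustration of the association and transport mechanisms in the list above, here is a minimal sketch, in Python, of a user agent fetching a site's policy reference file from the well-known location /w3c/p3p.xml, as the specification defines, and listing the policy URIs it declares. The example site is hypothetical.

    import xml.etree.ElementTree as ET

    import requests

    P3P_NS = "{http://www.w3.org/2002/01/P3Pv1}"

    def list_policy_references(site):
        """Fetch a site's P3P policy reference file and return the policy URIs it declares."""
        response = requests.get(site.rstrip("/") + "/w3c/p3p.xml", timeout=10)
        response.raise_for_status()
        root = ET.fromstring(response.content)
        return [ref.get("about", "") for ref in root.iter(P3P_NS + "POLICY-REF")]

    # Hypothetical usage; few sites still publish P3P policy reference files.
    print(list_policy_references("https://www.example.org"))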
    The W3C Work on Privacy
    Privacy Bird
This is a tool based on the W3C's P3P, used to filter browsing of other web sites; it is a filter of information coming in, not of user information going out:-
    The Privacy Bird will help Internet users stay informed about how information they provide to Web sites could be used. The tool automatically searches for privacy policies at every website you visit. You can tell the software about your privacy concerns, and it will tell you whether each site's policies match your personal privacy preferences by using bird icons.
    Privacy Bird
    The W3C Work on Privacy
    Protocol for Web Description Resources (POWDER)
    W3C Recommendation 1 September 2009
This recent recommendation has great relevance to our present purpose. The recommendation is the subject of ongoing usage and implementation research. Notice "publication of descriptions of multiple resources", which is essentially a Semantic Web action, and difficult to achieve without using that technology. Facebook, as it is constructed, would find it difficult to comply with this recommendation, and for that reason it is referred to as a walled garden: there is no way of understanding what is inside from outside, nor of accessing it in a consistent manner (despite it being indexed and mined for analytics). The advanced implementations of foaf+ssl that I am advocating here are designed exactly for this purpose.
    The Protocol for Web Description Resources (POWDER) facilitates the publication of descriptions of multiple resources such as all those available from a Web site. These descriptions are always attributed to a named individual, organization or entity that may or may not be the creator of the described resources. This contrasts with more usual metadata that typically applies to a single resource, such as a specific document's title, which is usually provided by its author.
    From POWDER 2009
    (Example) Use case
    2.1.8 Child protection B
    1. Thomas creates a portal offering what he considers to be terrific content for children. He adds a Description Resource expressing the view that all material available on the portal is suitable for children of all ages.
    2. Independently, a large content classification company, classification.example.org, crawls Thomas's portal and classifies it as being safe for children.
    3. Discovering this, Thomas updates his Description Resource with a link to the relevant entry in the online database operated at classification.example.org.
4. Five-year-old Briana visits the portal. The parental control software installed by her parents notes the presence of the Description Resource and seeks confirmation of the claim that the site is child-safe by following the link to the classification.example.org database, which her parents have deemed trustworthy.
5. On receiving such confirmation, access is granted and Briana enjoys the content Thomas has created.
From POWDER 2007
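A minimal sketch, in Python, of the parental-control check in steps 4 and 5. The powder, dr, iriset, includehosts and descriptorset elements are taken from the recommendation, but the document is simplified (attribution omitted); the childSafe descriptor, its namespace and the host name are hypothetical, and a real agent would also confirm the claim against the classifier's own database.

    import xml.etree.ElementTree as ET

    POWDER_NS = "{http://www.w3.org/2007/05/powder#}"
    CLASSIFIER_NS = "{https://classification.example.org/ns#}"  # hypothetical

    SAMPLE_DR = """
    <powder xmlns="http://www.w3.org/2007/05/powder#"
            xmlns:ex="https://classification.example.org/ns#">
      <dr>
        <iriset><includehosts>portal.example.com</includehosts></iriset>
        <descriptorset><ex:childSafe>yes</ex:childSafe></descriptorset>
      </dr>
    </powder>
    """

    def host_is_child_safe(document, host):
        """Return True if a description resource covers the host and declares it child-safe."""
        root = ET.fromstring(document)
        for dr in root.iter(POWDER_NS + "dr"):
            hosts = [h.text for h in dr.iter(POWDER_NS + "includehosts")]
            if host not in hosts:
                continue
            safe = dr.find(POWDER_NS + "descriptorset/" + CLASSIFIER_NS + "childSafe")
            if safe is not None and safe.text == "yes":
                return True
        return False

    print(host_is_child_safe(SAMPLE_DR, "portal.example.com"))  # True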
    Resources
    0: Danah_Boyd
    1: FOAF+SSL Alternative Implementation
    2: Henry Story
    3: P3P 1.1
4: The Center for Democracy and Technology
    5: P3P Specification Note
    6: Privacy Bird
    7: POWDER 2009
    8: POWDER 2007
    Adam Saltiel
    May 2010
     

Published on Friday, May 28th 2010. Edited by Rat Outzipape.

    Extended Discussion of Security, Privacy and Trust
    Security, Privacy and Trust
    Security and Privacy
I distinguish between security and privacy. Security is one means by which privacy is obtained, but it does not by itself create privacy. A reasonable form of privacy can also be obtained without extreme security measures.
Security can be visualised as being on a sliding scale. The most extreme is what I will call paranoid. If I were a student anywhere from Tiananmen Square through Tehran to Bangkok, that is the sort of security I would want on my mobile and my blog.
    Two Common Negatives
    One argument against giving people greater powers of privacy is this:-
    This might make it more difficult to intercept criminals engaged in various nefarious activities.
I am aware that this is a common concern and raised it with Henry Story just recently. He pointed out that groups of people, who are citizens of the larger society, are responsible for themselves and can be self-policing, at least in the main. I expect that policing authorities would be more concerned about cells (political, criminal or terrorist), but I really have to draw the line here at what I am competent to discuss.
I will have to say the same about another common objection: that this might be popularising tools that enable and encourage copyright infringement. This encouragement already occurs on a much larger scale anyway, for instance by Google, as I point out elsewhere.
    To clarify, there are different types of security and privacy that the Diaspora application could offer.
    Security
The greatest security would be achieved by having all data encrypted wherever it is stored and whenever it is transported across the public internet. Since trusted users (the data creator and others) must have access to the same data items both locally and across the internet, the means by which the data is decrypted could be the same in both situations. Nevertheless, the encryption of all data is an extra burden that would have implications in different parts of the Diaspora system.
Two terms are introduced: end-to-end encryption and group encryption.
End-to-end refers to SSL certificate authentication, as might be used by a bank where crucial data is sent to the bank in encrypted form. Usually only certain data is sent in this way, but the mechanism allows, and other systems use, encryption of all communications, for instance encrypted email. Group encryption is where access to a domain is always encrypted; the typical use case here is a VPN, where companies assure their employees access to the company intranet over the public internet. Some reasonable decisions must be made about what is needed here. Encrypted email exists for the situation where the traffic to a known domain is of interest to intercept. That is two things: the traffic is of interest, and the destination is known.
While just about all the data flowing in and out of Facebook could be intercepted, since it is a series of very well known destinations, there is a certain safety in numbers. Any one piece of data is likely lost in (although in principle recoverable from) the general noise. For more specific intercepts, user domains are needed. (Presumably these are readily available by one means or another.)
In the case of a more distributed system, intercepts would have to target the more difficult-to-find user domains.
So it is true that if further, near-absolute, security is required, all data would have to be encrypted, perhaps as a user choice.
This is, though, less of a lightweight process.
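As a minimal sketch of what "encrypt everything, perhaps as a user choice" might look like at a single node, here is symmetric encryption of a post at rest, using the Python cryptography package. Key management, the genuinely hard part, is assumed away: a real node would take the key from the user's keystore rather than generate one per run.

    from cryptography.fernet import Fernet

    # Assumption for the sketch: in a real node the key would come from the
    # user's keystore, not be generated afresh on every run.
    key = Fernet.generate_key()
    box = Fernet(key)

    post = "Holiday photos are up - friends only, please."
    stored = box.encrypt(post.encode("utf-8"))   # what the node writes to disk
    print(stored)                                # ciphertext, safe at rest
    print(box.decrypt(stored).decode("utf-8"))   # recovered by any key holder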
    Privacy
In my Simple Hypothesis above I state that there is a gap between user expectations and perception of the service they are using, a gap which is a product of the type of service being used and the way the service provider gains its revenue.
Privacy, on the other hand, can be satisfied by having easy-to-configure controls. Essentially these are read and write controls over content that travel with that content irrespective of context. This is a series of issues quite separate from security, apart from access to the privileges to change the read/write status of a content item.
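A minimal sketch of such controls travelling with the content item follows. The WebID-style identifiers and the split into reader and writer sets are my own assumptions; the point is that the policy belongs to the item, not to the hosting site.

    from dataclasses import dataclass, field

    @dataclass
    class ContentItem:
        owner: str                      # WebID of the principal (hypothetical)
        body: str
        readers: set = field(default_factory=set)
        writers: set = field(default_factory=set)

        def may_read(self, agent):
            return agent == self.owner or agent in self.readers

        def may_write(self, agent):
            # Writers may change the body; only the owner may change who
            # holds read or write privileges (not modelled here).
            return agent == self.owner or agent in self.writers

    post = ContentItem(
        owner="https://alice.example/profile#me",
        body="Draft notes on the federation design.",
        readers={"https://bob.example/profile#me"},
    )
    print(post.may_read("https://bob.example/profile#me"))   # True
    print(post.may_write("https://bob.example/profile#me"))  # False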
    Trust
In the Diaspora architecture it can be seen that the possibility of amalgamating two or three broad categories of approach is being considered.
These are the Open Profile / Browser Certificates approach, the Federation of Servers approach, and Peer to Peer, achieved by allowing servers to run on the user's computer. The two issues that this would address are:-
a. By putting a certain degree of trust in servers, these become unencrypted trust networks; in addition, are the SSL keys held on servers considered safe for the future?
    Explanation of a.
    Unencrypted trust networks. This means that various servers in the federation hold various amounts of data about the users of the network.
    Aggregating this data might have great value, for instance for an unscrupulous business or determined government.
There would be no protection against this, as the measure of protection that encryption might afford in such circumstances could not be enforced, or might be revoked if it existed, by one or more of the federated servers.
The safety of SSL keys refers to the private keys held on behalf of distinct user entities. The question is how safe the commodity service being relied on at this point actually is.
    b. Even if only encrypted messages are transported and stored, the social graphs would still be entrusted to remote servers.
    Explanation of b.
This shows both what is being considered in connection with trust of external servers and that peer to peer is considered the most secure solution. To achieve the highest degree of security, all data must be encrypted wherever it is held and in transport or, next rung down, be encrypted as it is sent over the network.
The social graph refers to the relationships between friends and items, and the history of their interaction.
    A Reasonable Question About Capability and Capacity
It would seem (in the common perception) that only very big services might be relied on for seamless storage over time. Here the assumptions are that they have the resources to tend to the infrastructure, and a reputation they wish to maintain: powerful drivers to maintain the offered service.
Again it should be noted that there may not be any contract of this nature between provider and service consumer.
The Diaspora distributed way of guaranteeing capability and capacity is to rely on several nodes, replication between nodes, and careful engineering of the relationship between online and offline nodes, in the context of the information that should be delivered to each node (public notices and FOAF relationships, plus privacy constraints).
    Note the recurrence of the use of the FOAF profile. This is another powerful reason to create an architecture that uses FOAF directly and would be able to exploit its potential as a Semantic medium.
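To show what using FOAF directly looks like, here is a minimal sketch with the Python rdflib package; the names and URIs are hypothetical. The same graph that records who knows whom can be queried when deciding what to replicate to which node.

    from rdflib import Graph, Literal, URIRef
    from rdflib.namespace import FOAF, RDF

    me = URIRef("https://alice.example/profile#me")
    bob = URIRef("https://bob.example/profile#me")

    g = Graph()
    g.bind("foaf", FOAF)
    g.add((me, RDF.type, FOAF.Person))
    g.add((me, FOAF.name, Literal("Alice")))
    g.add((me, FOAF.knows, bob))   # one edge of the social graph

    print(g.serialize(format="turtle"))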
    Separating the Profile from the Social Graph
Profiles should be conceived of as a set of rules that allow for dynamic negotiation between different particular profiles, combining them into a profile set.
Initial implementation can be straightforward and should just take account of access controls. Later, semantic reasoning tools can be applied to the data set for more expressive results.
The Security Context of the Social Graph
The social graph must be available for read and write according to a schema which is the intersection of profiles, as controlled by the principal, this user.
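A minimal sketch of that intersection, under my own assumptions about what a profile contains: each profile grants a set of operations, and the effective schema is only what every applicable profile permits.

    from functools import reduce

    # Operations each profile grants an agent over parts of the social graph
    # (hypothetical operation names).
    work_profile = {"read:contacts", "read:posts"}
    friends_profile = {"read:contacts", "read:posts", "write:comments"}

    def effective_schema(*profiles):
        """Intersect all applicable profiles into one effective access set."""
        return reduce(lambda a, b: a & b, profiles)

    # Only what both profiles permit survives: read:contacts and read:posts.
    print(effective_schema(work_profile, friends_profile))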
    Resources
    1: Henry Story
    Adam Saltiel
    May 2010