Is it possible that the traditional ERP vendors may lose their dominant positions in mid-sized and large enterprises because of cloud computing and what it enables – notwithstanding their own efforts to exploit the cloud?
Seems to me that the cloud enables business managers to demand a different experience when implementing information solutions to support their businesses. There is an emerging demand for simpler, faster, cheaper implementations – potentially not built on one integrated solution from a single ERP vendor. And this may work well for the implementation partners too. Ultimately they may have to work off a reduced margin – but for significantly reduced investment and a reduced risk of failure.
Excellent piece recently in CIO dealing with the future of ERP. The piece does not purport to have all the answers – but certainly speaks to the challenges being faced by traditional vendors and the opportunities for those with solutions built for the cloud.
data.gov.uk is about to become a reality. Tim Berners-Lee and Nigel Shadbolt cover this in their article, ‘Put in your postcode, out comes the data’, in The Times, 18/11/09.
The UK government is moving forward on a similar basis to the US government – making public-sector data openly available.
Curious to see how far advanced we are with regard to implementing something similar in Ireland – in the context of our knowledge society and smart economy. It must make sense to make this type of information available – as argued by Tim Berners-Lee in the referenced article.
Three different examples recently reported of use of semantic web technologies to improve online advertising efforts.
OpenAmplify is a web service developed by Hapax that brings human understanding to content. Using patented Natural Language Processing technology, OpenAmplify reads and understands every word used in text. It identifies the significant topics, brands, people, perspectives, emotions, actions and timescales and presents the findings in an actionable XML structure.
NEW YORK – ad pepper media, the international online advertising network and semantic advertising technology solutions provider, launched the SiteScreen for Agencies platform, enabling advertising agencies to apply its ground-breaking SiteScreen semantic brand protection technology across their entire range of online media buys to effectively prevent ad misplacements.
In Italy, Quattroruote is a leading online magazine for car aficionados and buyers, with its reputation built on testing and evaluating models and its own blue book-like price estimates for vehicles. Now it’s a leading-edge user of semantic web technology, too.
It has deployed Expert System’s Cogito semantic solution to help add value to user searches for used cars in its portal to the world of classified car sales.
There is a great deal written about web 3.0/the semantic web in terms of knowledge and intelligence. Much of it relates to computers being able to process data published on the web and ‘understand’ it – either via Natural Language Processing solutions or through markup such as the Resource Description Framework (RDF).
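The RDF idea can be illustrated without any special tooling: every statement is a subject–predicate–object triple, with resources identified by URIs. A minimal sketch in Python (the resource URIs below are illustrative placeholders, not real published data):

```python
# RDF models each statement as a (subject, predicate, object) triple.
# The URIs here are illustrative placeholders, not real published data
# (only the foaf:name property URI is a genuine vocabulary term).
triples = [
    ("http://example.org/person/tim",
     "http://xmlns.com/foaf/0.1/name",
     "Tim Berners-Lee"),
    ("http://example.org/person/tim",
     "http://example.org/prop/proposed",
     "http://example.org/concept/semantic-web"),
]

# Because subjects and predicates are URIs rather than free text, a
# machine can query the data, e.g. list every statement about a resource.
def statements_about(subject, data):
    return [(p, o) for s, p, o in data if s == subject]

print(statements_about("http://example.org/person/tim", triples))
```

This is, of course, what NLP approaches try to arrive at from the other direction: extracting such structured statements out of ordinary text.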
This piece of research being conducted by IBM reminds us of the competition – the human brain.
For now I see the real benefit of the semantic web as giving me some assistance in processing the vast amount of data available on the web (and within enterprises – under the linked open data initiative). For instance, if, going to a meeting to discuss evolving health and safety issues in the construction industry in Australia, I have a piece of software which can filter, find and summarise much of the information and data in the public domain, then my contribution to the meeting may be more valuable (or my preparation time may be shortened). Again, within the context of the semantic web, my profile – if I have an interest in such a field – should result in my being prompted with relevant information. This ties in with Kevin Kelly’s dictum, ‘No personalisation without transparency’.
Find myself being asked more regularly to explain ‘the semantic web’. I think it’s a combination of a growing awareness of the semantic web in the business community and a greater focus on the topic on my part.
Read a piece this morning on the hypios web site – a web 2.0-based problem-solving site. In the first page of this essay the author offers an excellent introduction to the semantic web (and the need for a semantic web).
The only reservation I would have is the ‘plea’ to business to make more data available publicly as linked open data. I agree with the sentiment – but I am not sure that business will act on such sentiment.
To some extent I think Tim Berners-Lee may almost be a victim of his own success. Seems to me his initial guidance to government (and others) was to get on with making the data available (at that time he was not stressing the need to provide the data in RDF format). Now that data.gov has provided data, TBL and others are understandably pushing for the data to be in RDF format – to enable linking of the data.
Obviously we, promoting things semantic, want the data to be published and easily linkable. But sometimes, as per Paul’s posting, I think we make it all look a little more confusing than necessary, by ‘mashing’ (apologies for the pun) the terminology.
I referenced recently Tim Berners-Lee’s encouragement to everyone looking to publish linked open data to use the Resource Description Framework. I also referenced in this blog recent work completed by the New York Times in this field. The New York Times initiative has attracted a fair amount of comment in the technical community identifying the teething issues/errors in the data as published.
Stefano Mazzocchi’s recent post, Data Smoke and Mirrors, speaks to some of the issues associated with publishing lots of linked data using RDF. Stefano has reviewed a triplification of all the data from data.gov – and has been left somewhat bemused. The posting itself provides some examples.
The point here is that we want to see the data published and the standards used – but it’s far from simple, and publishing for the sake of publishing or triplifying for the sake of triplifying may be self-defeating. As a community we need to focus on quality and on the end user of the data.
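To make the point concrete, here is a minimal sketch (in Python, with entirely made-up triples and rules) of the kind of basic sanity check a publisher might run before releasing triplified data – e.g. flagging opaque identifiers or empty values rather than publishing them as-is:

```python
# Hypothetical sanity check over triplified data: flag subjects that are
# not HTTP URIs (and so cannot be dereferenced or linked to) and objects
# that are empty literals. The triples and rules are illustrative only.
triples = [
    ("http://data.example.gov/dataset/123",
     "http://purl.org/dc/terms/title",
     "Air quality 2009"),
    ("row_457",                              # opaque row id, not a URI
     "http://purl.org/dc/terms/title",
     ""),                                    # empty value
]

def problems(data):
    issues = []
    for s, p, o in data:
        if not s.startswith("http://"):
            issues.append((s, "subject is not an HTTP URI"))
        if o == "":
            issues.append((s, "empty literal object"))
    return issues

print(problems(triples))
```

A real quality pipeline would check far more than this, but even crude checks of this sort would catch much of what makes mechanically triplified data hard to reuse.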
In today’s Irish Times I read a commentary on the speech by David Guinane, at last night’s Institute of Bankers’ Dinner. His comments included: ‘As bankers, we must recognise first and foremost that this crisis has been caused by the failure of our sector to fully understand and manage the risks inherent in our business‘.
This type of sentiment – expressed publicly – is part of the required social and economic reconciliation process. Serious mistakes were made by bankers – for a range of commercial reasons. Others were not innocent – those who got caught up in various ventures, those who adjusted the basis of the country’s finance, those who failed to implement rigorous regulation. Some may have overstepped the mark completely.
It is important that groups acknowledge their mistakes (and any wrongdoing where it took place). There are many who could follow Mr Guinane’s line – it would greatly assist the reconciliation process.
Now we need to focus on learning the lessons, taking the corrective actions and reforming as a team. There are some signs of this – but one should not underestimate the anger of those marching yesterday. The reconciliation process to date has been inadequate.
This makes the data more useful. You can now cross-reference and correlate the NY Times information with other information available on the web, e.g. DBpedia (the RDF version of Wikipedia). You can also develop applications which can access, process and interpret the NY Times data – because it is provided in RDF format.
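A minimal sketch of what that cross-referencing looks like in practice – the URIs, properties and triples below are illustrative stand-ins, not the actual NY Times or DBpedia data:

```python
# Illustrative only: tiny stand-ins for NY Times and DBpedia triple sets.
nyt = [
    ("http://data.nytimes.example/obama", "nyt:articleCount", "1500"),
    ("http://data.nytimes.example/obama", "owl:sameAs",
     "http://dbpedia.example/resource/Barack_Obama"),
]
dbpedia = [
    ("http://dbpedia.example/resource/Barack_Obama",
     "dbo:birthPlace", "Honolulu"),
]

# owl:sameAs links assert that two URIs name the same entity, letting an
# application merge facts drawn from independently published datasets.
def merged_facts(subject):
    facts = [(p, o) for s, p, o in nyt if s == subject and p != "owl:sameAs"]
    same = [o for s, p, o in nyt if s == subject and p == "owl:sameAs"]
    for uri in same:
        facts += [(p, o) for s, p, o in dbpedia if s == uri]
    return facts

print(merged_facts("http://data.nytimes.example/obama"))
```

The value is in the join: neither dataset alone answers a question that spans both, but the sameAs link makes the combined answer mechanical.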
Interesting development – and makes sense of the Linked Open Data initiative. The NY Times is embracing RDF – to some extent it is giving away its data, but on the other hand its own data is far more valuable because it can easily be combined with other (RDF’d) data.
Quite a challenge to all organisations – especially those generating significant content – which are failing to have their data leveraged properly because it sits in its own silo.