The biggest takeaways for me were in learning how semantic web technologies can be split into two camps: little "s" and big "S".
Semantics is all about the process of understanding words. I might define the semantic web as the practice of building a website that describes its content in a format technology can read, so that a machine can immediately understand the meaning of that content for another use. Web semantics also includes technology that is able to grab content and give you an easy way to develop an understanding of the meaning of content on a website.
Machines, unlike humans, don't understand the context of words. A language offers so many possible combinations that the meaning of a sentence is lost on a machine without an understanding of the context of the content.
For example, I might say I am trailblazing in the world of technology, but to a machine that could mean I'm building a trail, putting out a fire, or leading the way to the future of technology. Context is everything in language: you either have to provide an explanation of the context of content when you publish a website, or have a mechanism for understanding the context over time. Human intervention is needed in both cases. Either someone describes published web content's meaning in an agreed-upon format before a technology captures the data from a website, or people support a model of what the content means after the data is captured.
Little "s" web technologies capture and filter data with no description or understanding of the data provided during the capture process. The process of understanding the meaning of that data starts only after capture has happened; people have to intervene to provide the context and meaning for language on the web.
Big "S" web technologies, by contrast, provide a framework for describing data on a web page at the moment it is published. When the data is read or captured, its semantic meaning has already been described, so you don't have to go through the process of working out what the data means.
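To make the big "S" idea concrete, here is a minimal sketch of what "meaning described at publish time" looks like to a machine. The snippet and its values are hypothetical; it assumes a publisher has embedded JSON-LD using schema.org terms, one common open standard for semantic markup.

```python
import json

# A hypothetical snippet of big "S" markup: the publisher declares the
# meaning of the content up front, using an agreed-upon vocabulary
# (schema.org terms here, embedded as JSON-LD).
published = """
{
  "@context": "http://schema.org",
  "@type": "Person",
  "name": "Jane Doe",
  "jobTitle": "Firefighter"
}
"""

data = json.loads(published)
# Because the semantics were declared when the page was published, a
# machine can read the meaning directly -- no human interpretation step.
print(data["@type"])     # the kind of thing being described
print(data["jobTitle"])  # an unambiguous, agreed-upon property
```

The machine never has to guess whether "Jane Doe" is a person, a product, or a place; the markup says so in a vocabulary both publisher and consumer agreed on in advance.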
Currently the semantic web is embryonic: most webmasters have not built websites with semantic data. The next stage for the web is building big "S" websites where the data is already described within an agreed-upon standard. That said, one of the panelists, Sean Martin, CEO of Cambridge Semantics, a semantic consulting and middleware provider, believes that in the last year we have just started to see some useful implementations of big "S" semantic web projects.
One important insight I gained tonight was that data that is gathered and defined using little "s" web technologies could then be used for data manipulation by big "S" technologies.
Mike Spataro works for Visible Technologies, a social media monitoring and research company. Visible uses text analysis and natural language processing to capture content in social media, identify relevant content, and enable clients to interpret the meaning of that content through an intuitive process. Visible Technologies captures data and asks clients about the context of what was captured. Over time the process is repeated: each new web page of data builds the library of meaning, and clients teach the Visible semantic engine what's relevant to them and what's not.
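The capture-then-teach loop described above can be sketched in a few lines. This is an illustrative toy, not Visible Technologies' actual system: content arrives with no built-in meaning, and people label it after the fact, gradually building a library of what matters.

```python
# Little "s" in miniature: meaning is supplied by humans AFTER capture.
relevant_terms = set()    # the "library of meaning", built up over time
irrelevant_terms = set()

def teach(term, is_relevant):
    """A human labels a captured term as relevant or not."""
    (relevant_terms if is_relevant else irrelevant_terms).add(term)

def score(post):
    """Count how many learned relevant terms appear in a captured post."""
    words = set(post.lower().split())
    return len(words & relevant_terms)

# Capture happens first; understanding is taught afterward, one label
# at a time, and improves with each round.
teach("outage", True)
teach("weather", False)

print(score("customer reports an outage in boston"))  # → 1
print(score("nice weather today"))                    # → 0
```

Each round of labeling makes the next round of captured data a little easier to triage, which is exactly the human-in-the-loop pattern little "s" technologies depend on.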
The technology goes one step further and can route opportunities found in social media to individuals within the enterprise. Basically, Visible Technologies has built a product that mirrors the process of monitoring, triage, and response developed by early pioneers in social media engagement such as Dell, Microsoft, Comcast, and IBM. In fact, the social media monitoring and CRM industries are converging. An opportunity found within social media can now be responded to by a company employee, and if a company has existing knowledge of a customer within its CRM, that customer data can be tied to the triaged information found by monitoring social media.
As companies build more websites with semantic metadata describing the context and meaning of their data, companies like Visible Technologies will be able to combine information captured using little "s" technology with big "S" technology. I suspect few customers will take the time to describe the context of a complaint, but a customer might spend the time describing their overall background and identity, especially if such semantic data helps with search engine rankings or provides data to other semantic engines.
Sean Martin was amazing during the discussion; he provided a clear explanation of how semantic technologies work, and as he is based in Boston I hope we can collaborate in the future and put on more events together. Christine Connors, Principal of TriviumRLG LLC, a semantic technologies consulting company, gave us a lot of working examples, especially as she has used semantic technologies in her prior roles at Raytheon, Intuit, and Dow Jones. I very much appreciate her driving up from New York to be on the panel.
Several audience members contributed questions, comments, and ideas. In addition to asking the panel questions, I opened those same questions up to the audience, and a number of audience members described their experiences using semantic technologies for product development. Not only did the semantic technologies help with the development of a product, but the process of using them also helped with managing projects, and made it easier to switch vendors within a software project because the structure of their data was described using open semantic standards.
Lastly, both Sean and Christine commented on Google's announcement that semantic data will be considered by the search engine in ranking results. Sean suggested that Google could be a significant driver in the adoption of open semantic standards, as site owners build semantic data to gain an edge in search engine optimization. With my background in SEO, I have an additional reason to continue exploring the future of the semantic web.