Producing Derived Data From Mobile & Social to Support Orientation, Decision, and Action

Inspiration for this post comes from the industry-analyst quote below, which misses the structured-plus-unstructured-data mark:

Progressive Insurance, a leader in the property and casualty insurance industry, for example, was able to profit from combining structured data (customer and claims information) and unstructured data (driving behavior captured via sensors), resulting in a new usage-based insurance (UBI) program that has reached more than 1 million customers who are saving more than $125 million on auto insurance, while generating more than $1 billion in revenue.

While it's great to see Progressive's success harnessing the power of connected Mobility, Location, and Customer Data, customer and claims data and automotive diagnostic data (captured via OBD or any other sensor) are both quite organized and well structured, and thus easy to bind together to produce new driving-behavior insights. These data blendings, and these types of use cases, do not adequately represent the problem (and thus the opportunity) of combining structured and unstructured data to produce deeper and broader business-intelligence insights.

These latter insights today combine modern mobile- and social-platform-generated free-form text, multimedia (images, videos, voice recordings, etc.), and other trapped-in-IT-yesteryear-land data documents with structures that help make sense of all the unstructured gobbledegoo. If you put the gobbledegoo in a blender, you get a mud smoothie. Crud in = crud out. It might look pretty in some cases, but it's not very tasty. And honestly, who cares how nice it looks? It should produce new derived data that in turn supports a quicker orientation, decision, and action cycle, right?

Some attest that the trick is to make time and lat/long location the unifying attributes that link mobile & social gobbledegoo together. All you need to do is capture this data and throw scalable parallel-processing infrastructure and iron at it to handle firehose velocity and volume. From here it's easy to produce map snapshots around static events in space. If the goal is to amaze with map eye candy, great.

However, if the goal is to produce new derived data that in turn helps answer ongoing questions, offering new insights to support ongoing orientations, better decisions, and more thoughtful, calculated, or competitive actions, then I tend to think a better approach is to take the unstructured gobbledegoo and wrap other structured orientation around it, already linked to a structured (read: understood | oriented) world. From here, it's relatively easy to extend and derive; the subsequent challenge becomes understanding the newly discovered orientation, then creating new value from the derived insight, and then repeating the learning cycle again and again. Timestamps and lat/longs on a map don't do this. They exist in time, usually in the past, and show you attractive spatiotemporal pattern visualizations of what was once known. Cute. But it stops there.
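To make the contrast concrete, here is a minimal, entirely hypothetical sketch of the "wrap structured orientation around it" idea: instead of only plotting geo-tagged posts as dots on a map, each unstructured post is linked to the nearest entity in a structured reference table (a made-up store list), and a new derived metric is produced per entity. All names, data, and keywords are invented for illustration; a real system would use a geo index and proper text analytics rather than brute-force distance and keyword matching.

```python
from dataclasses import dataclass
from collections import Counter

@dataclass
class Store:   # the structured, already-understood world (hypothetical)
    name: str
    lat: float
    lon: float

@dataclass
class Post:    # unstructured social gobbledegoo with a location stamp
    text: str
    lat: float
    lon: float

def nearest_store(post, stores):
    """Link a post to the closest structured entity.
    Flat-earth squared distance is fine for a sketch; real systems
    would use haversine distance and a spatial index."""
    return min(stores, key=lambda s: (s.lat - post.lat) ** 2 + (s.lon - post.lon) ** 2)

def complaints_per_store(posts, stores, keywords=("slow", "broken", "rude")):
    """Derived data: complaint counts per store, not just dots on a map."""
    counts = Counter()
    for p in posts:
        if any(k in p.text.lower() for k in keywords):
            counts[nearest_store(p, stores).name] += 1
    return dict(counts)

stores = [Store("Downtown", 40.71, -74.01), Store("Uptown", 40.80, -73.95)]
posts = [
    Post("Checkout line is SO slow here", 40.712, -74.008),
    Post("Love the new displays!", 40.801, -73.951),
    Post("Broken door again...", 40.709, -74.012),
]
print(complaints_per_store(posts, stores))  # {'Downtown': 2}
```

The output is no longer a snapshot of where posts happened; it is a new, structured attribute of an already-understood entity, which can feed the next round of orientation, decision, and action.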