Iteration as Evolution

In Sheffield I once saw a selection of table forks in a museum. They were arranged over time, spanning maybe 200 or 300 years of forks. From left to right they evolved from long, two-pronged meat-stabbing devices into the short four-pronged fork we know today.

To deal with a sump pump issue I needed to connect a 1 1/4 inch pipe to a US garden hose. Such an adapter didn’t exist on Thingiverse as far as I could find, so I hacked one together.

Here are versions 1 through 6 of that connector. What’s interesting to me is that this is how I, and I think most people, actually build things: a Taleb-esque random search rather than any real top-down design that JustWorks(TM) the first time you try it. It’s the messy real-world process of learning-by-doing.

  1. Version 1 on the far left was from Thingiverse. It assumed the hose had no screw connector and would instead be jammed inside, which was wrong. The pipe on the bottom was also too narrow to actually fit the 1 1/4 inch pipe.
  2. V2 was made with the OpenSCAD thread library. The threaded part was too narrow because I measured the wrong thing. The pipe was too fat because it conformed to the outer, not the inner, dimensions of the pipe.
  3. V3 nearly fixed the screw top. It was still a little too narrow, maybe due to plastic oozing effects or something. The bottom was too narrow because I mixed up radius and diameter.
  4. V4’s screw was perfect, but the bottom was too thick due to plastic expansion from the printing process.
  5. V5 had the same problem.
  6. For V6 I shortened it a little, used some plumber’s tape, and it worked great.
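Most of the failures above are dimensional, which suggests doing the unit bookkeeping in code before modeling anything. A minimal sketch; the compensation offsets here are hypothetical placeholders you would calibrate for your own printer, not measured values:

```python
# Compensate nominal hole/peg diameters for printing effects before modeling.
# The offsets are assumptions for illustration; calibrate per printer/material.

HOLE_EXPANSION_MM = 0.4   # printed holes tend to come out this much too small
PEG_EXPANSION_MM = 0.3    # printed pegs tend to come out this much too big

def hole_diameter(nominal_mm: float) -> float:
    """Diameter to model so the printed hole ends up at nominal_mm."""
    return nominal_mm + HOLE_EXPANSION_MM

def peg_diameter(nominal_mm: float) -> float:
    """Diameter to model so the printed peg ends up at nominal_mm."""
    return nominal_mm - PEG_EXPANSION_MM

# 1 1/4 inch pipe: the inner diameter is what the connector must slide into.
INCH = 25.4
pipe_id = 1.25 * INCH          # 31.75 mm (and that's a diameter, not a radius!)
print(peg_diameter(pipe_id))   # model slightly undersized so it actually fits
```

Keeping the radius/diameter conversion and the shrinkage fudge in one place would have caught the V3 and V4 mistakes before an hour-long print.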

Each iteration took about an hour to print and test over a few days. You can download it at Thingiverse.

It’s better to just START and iterate than to sit around thinking about problems.

3D Printing Bar End Plugs

Continuing the theme of 3D printing… One of my bikes lost its bar end plugs, which fill the holes on the ends here:

To me, these looked approximately like two circles joined by straight lines. Or, sort of an egg-shape. Enter 3D printing! Here’s my hacky model:

And here it is printed:

And two of them, one white, one green, on the bike:

This is sort-of pointless-yet-fun 3D printing, which so far has been the majority of it (the exception being the lever).

All the files are on Thingiverse here.

Access to Tools

This YayLabs Play and Freeze Ice Cream Ball Ice Cream Maker has two identical screw lids on opposite sides of a sphere. In one you put heavy cream and whatever else you want in your ice-cream. In the other you put lots of ice and rock salt. The salt lowers the melting point of the ice, pulling the cream colder than the temperature at which water would otherwise freeze. You roll the ball around and after 10 minutes or so, have ice-cream.
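The salt trick can be put in rough numbers with the standard freezing-point-depression formula. A back-of-envelope sketch, assuming an ideal solution (real brine behaves somewhat differently, so treat the output as ballpark):

```python
# Freezing point depression: dT = i * Kf * m
#   i  = van 't Hoff factor (NaCl dissociates into ~2 ions)
#   Kf = cryoscopic constant of water, 1.86 degC.kg/mol
#   m  = molality (mol of salt per kg of water)

def freezing_point_c(salt_g: float, water_kg: float,
                     i: float = 2.0, kf: float = 1.86,
                     molar_mass: float = 58.44) -> float:
    """Approximate freezing point (degC) of an NaCl brine."""
    molality = (salt_g / molar_mass) / water_kg
    return -i * kf * molality

# A couple of handfuls of rock salt in the ice water drops the brine
# well below 0 degC, cold enough to freeze the cream through the wall.
print(round(freezing_point_c(230, 1.0), 1))
```

This is why plain ice (0 °C) never quite freezes the cream, but salted ice does.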

The problem is that due to various thermal effects the lids get very stuck. Like, hitting-them-with-a-hammer-doesn’t-work stuck. My first theory was to make a cylinder with a notch in it and hope that provided enough grip to open the thing. So I designed and then 3D printed this in 30 minutes:

Which goes on like this:

And of course, that didn’t work. So I added a handle for a lever using OpenSCAD:

Then put it through the printing process using Simplify3D:

Ending up with a basic lever tool:

Which works great!

This isn’t solid plastic. Simplify3D prints a three-dimensional crosshatch mesh inside the solid, and it apparently has more than enough strength to open the ice-cream maker. It took about 5.5 hours to print and a few dollars of plastic. I was hoping that some similar tool already existed, with a variable width on the notch piece, to open things like this (CamelBaks, for example!). But I couldn’t find anything with my various searches.

I can’t help but think back to The Whole Earth Catalog – Access to Tools:

Making your own tools is a powerful experience, physical tools like my lever thing or ephemeral like software. Try it.

It also made me wonder if this is approximately how things will work out when we get to Mars – shipping lots of 3D printers instead of parts since the cost of delivery will be pretty high and we don’t really have a clue what they’ll need on the ground.

The total design time was about 10 minutes, and the code is all static numbers that reflect the rough measurements of the lids. It helps if you already know some geometry and have the OpenSCAD cheat sheet open.

Someone could take it and make it a more generic tool for various lids by putting different cylinder-notch combinations down the length of the tool, instead of just one at one end. It could also use both sides. Or, various cylinders with a notch on top to take a lever. The lever could be printed or use some common size, like a screwdriver that you could slot in to use as a lever.
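A generic version could be driven by a small table of lid sizes. A hypothetical sketch of how the cylinder/notch parameters might be generated before handing them to the modeling code; every dimension here is invented for illustration:

```python
# Generate (outer diameter, notch width) pairs for a multi-size lid tool.
# Real values would come from measuring actual lids; these are placeholders.

def notch_specs(lid_diameters_mm, notch_fraction=0.15, clearance_mm=0.5):
    """One (outer_diameter, notch_width) spec per lid, largest first."""
    specs = []
    for d in sorted(lid_diameters_mm, reverse=True):
        outer = d + clearance_mm          # cylinder slides over the lid
        notch = d * notch_fraction        # notch grips the lid's tab
        specs.append((round(outer, 2), round(notch, 2)))
    return specs

# Hypothetical lids: ice-cream maker, a big jar, a water-bottle cap.
for outer, notch in notch_specs([95, 80, 63]):
    print(f"cylinder {outer} mm, notch {notch} mm")
```

Each pair would then become one cylinder-with-notch section along the length of the tool.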

The obvious thing to do would be to print a nut into the top of the cylinder so you can use a wrench. I don’t think it’s very likely the 3D-printed plastic would handle the stress of that, however. Presumably things like this already exist in metal somewhere, and we can print in various metals today too.

The length of the lever is about the limit of what my printer can print – about 20cm or so, which is less than a foot.

The files are all here on Thingiverse.

Things Get Weirder The More You Study Them

I’ve been struggling to articulate to some friends my problem with science. Not science in the sense of chemtrails or the modern world being inherently bad, and not the idealized science that exists in people’s minds. The problem is with real science as practiced by human beings. I have three lines of problems with any science outside experimental physics, where there’s an actual reality to test things against.

My friends, to set up a straw man, believe in diligent, hard-working and often well-paid scientists. They possibly wear white lab coats and run experiments. There’s a selection process which somehow funnels the best scientists to the best problems where they Learn Results. These Results are then disseminated to the populace so we can all live better.

So, my problems with this:

First, it’s a giant circle jerk. Having worked in academic environments I’ve seen firsthand how much BS is produced. Most published papers are now not cited by anyone at all, ever. It’s become a write-only medium. So we can throw away 80-95% of academic output, and on one level this is fine; it’s okay to frame academia as a place to experiment with a low chance of success. But almost no science studies are double-, triple- or quadruple-blind, which is what it would take to actually prove something, tentatively, in some small domain.

Second, Kuhn. I’ve seen up close highly paid, smart people fail to see the wood for the trees. We have to wait for people to die for progress to happen.

Third, scientism and the application of science in the wrong places. Scientism is where we make things look science-y because of reasons. The application is much more insidious. Consider type-2 diabetes. We study the heck out of it and have scienced our way to artificial insulin, which is great for T1 diabetics. Think of all those highly paid and smart researchers figuring out how to make insulin and getting it past the FDA. The years and billions of dollars. But for T2, insulin just slowly kills you. It has enabled a vast number of people to begin and then keep their diabetes rather than solve the underlying problem, which is high insulin. Dr. Fung points out the insanity of treating high insulin with more insulin, and the first sentence of his first book is “why are there fat doctors?” After all, doctors are smart, highly motivated, diligent and well paid, so you can’t just say it’s bad morals, lack of information or laziness.

Today I caught this study about how much titanium dioxide diabetics have in their pancreas. Great work, good for them. But there’s something wrong. Again we have smart, motivated and well-paid researchers off studying some third- or fourth-order effect instead of trying to fix the basic problem of diabetes. That problem also happens to be the biggest problem in retail medicine today: obesity predicts almost everything about your health outcome, and we’re all obese or nearly there.

To avoid this human problem, we need to keep asking the five whys.

I love science the same way I love the idealized point, line, square or cube. They can only exist in our heads, just as science can only really exist in our heads. When it meets reality, we study causation the wrong way around, publish nonsense or study some downstream effect. And that’s before we use the scientific method to figure out how to make problems worse, like we did with insulin, congratulating ourselves along the way for our techno-scientific progress. Look at all the science the Russians used to copy the Shuttle or Concorde.

It’s like a drug addict who has an unknowing subconscious desire for a drug. They’ll use a vast amount of higher cognition and action to procure the drug and to logically prove to themselves why they need it. These higher-level faculties, the rational mind, are the servant to, not the master of, our subconscious. In the end though, it’s often-if-not-always a subconscious motivation that needs to be compassionately fixed to heal the problem. Throwing the logical downsides of drug addiction at an addict all day long doesn’t work at all.

Ah, but vaccines! And Boeing 787s! And particle accelerators! Of course there are useful outputs of science-as-practiced-by-humans. Most of us wouldn’t be alive without them; that’s not the point. It’s that this is a tiny minority of science-as-practiced-by-humans and, if anything, they were probably lucky accidents. After all, the guy who invented washing hands (which we all do 10 times a day now) was thrown into a lunatic asylum, and died there, for discovering it and then trying to tell people about it! What a clown!

But that is ancient history, right? We’re better now!

I was walking around San Francisco once with a friend when I expressed a desire not to be caught downtown during an earthquake. He assured me that we were safe from collapsing structures since we probably now had “new concrete” that was probably much more earthquake-proof. What a wonderful story! Look how easily we can invent narratives! I want some of this “new concrete” for my house! Plus, some buildings had survived previous earthquakes and were likely to be fine. Or were they weakened by previous earthquakes? Or maybe this “new concrete”, if it exists, has some fatal flaw. We will simply never know; we have to wait for the next earthquake to find out. And yet, here in the richest country in the world, with scientists everywhere, we can still build bridges that collapse as soon as you install them.

Lastly, there are real limits on our knowledge. First, Cantor’s diagonal slash puts real limits on what we can prove about anything. Cantor is why we remember Gödel and Turing; it’s foundational to the computer you’re using to read this. Second, we can’t even measure the length of a coastline, thanks to fractals: as you measure with smaller and smaller rulers, the total measured length can tend to infinity!
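The coastline point is easy to demonstrate on the Koch curve, the textbook fractal coastline: every time you shrink the ruler by a factor of three, the measured length grows by a factor of 4/3. A short sketch of that standard result:

```python
# At refinement level n, a Koch curve built on a unit segment consists of
# 4**n pieces, each of length (1/3)**n, so the measured length is (4/3)**n.
# A smaller ruler (deeper level) means a longer total measurement, forever.

def koch_length(level: int) -> float:
    """Total length measured with a ruler of size (1/3)**level."""
    return (4 / 3) ** level

for level in [0, 5, 10, 20]:
    print(level, round(koch_length(level), 2))
```

No limit is ever approached; the “true length” simply isn’t a well-defined quantity.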

These aren’t toys or silly extremes; they cut to the very heart of what it is possible to know (at least using the systems of knowledge we have), even if we were perfect and diligent science robots, let alone human beings. And that’s only two of the constraints. There are more! To avoid these problems we have to limit any knowledge we try to build to small buckets of time, space and energy. Because if you study things too much, it gets very odd, just like the transition from Newton to wave-quanta in physics. The more we try to pick apart reality using what we know about the human-scale world, the odder it gets from our perspective. If you try to include the far past or future in your knowledge, it all falls apart (the big bang, the heat death of the universe). If you try to include the very fast or very slow, it all falls apart (do I need to mention relativity?). If you try to include the very small or the very large, it all falls apart (quanta or dark matter).

What do we do with all this? I have a clue. I’m writing a book about it, sign up to stay in the loop. You’ll only hear about the book, infrequently.

Healthcare Irony

The news is out – Atul Gawande is to lead the new Berkshire/Amazon/JP Morgan health initiative.

I love most of what Buffett says, I’ve been to Omaha and read the books. A few years ago someone in the audience at the Berkshire meeting asked if they were aware how bad sugar was. The response from Buffett was something along the lines of how happy people were at DQ and how few smiles he sees at Whole Foods, and that he himself is 40% Coca-Cola by weight as he drinks so much of it.

Sitting in the audience I thought of how it would sound if you just replaced sugar with cigarettes, and if Buffett had said something like he “loves cigarettes because they make you smile and he’s 40% Marlboro by weight”. And, yes, Buffett is quoted in Barbarians at the Gate saying this:

I’ll tell you why I like the cigarette business. It cost a penny to make. Sell it for a dollar. It’s addictive. And there’s a fantastic brand loyalty.

Because this equally applies to sugar, it’s a wonderful investment. I was randomly thinking about the Berkshire sugar investments last night and came up with this:

|       | Room Temperature | Cold |
| ----- | ---------------- | ---- |
| Solid | See’s Candy      | DQ   |

With my investment hat on I love this too, but knowing people who’ve had cancer or diabetes, it’s horrendous. The leap from sugar to cancer or T2 diabetes may come as a shock, especially if you’re versed in the “calories and exercise” theory of obesity (which has no actual evidence behind it). So the irony is that perhaps the people who’ve most profited off bad eating decisions are leading the charge to reduce the costs of the fallout. By the way, Charlie Munger is losing the eyesight in his remaining eye. I’d bet this is a result of diabetic retinopathy, which is a direct result of diabetes/metabolic syndrome, which is just sugar intake. Of course I could be wrong.

I was struck by a quote in The Magic Pill (which is on Netflix, by the way). Two quotes, actually. The first was that essentially all noncommunicable disease is caused by carbohydrates, primarily sugar. This turns out to be true from all the research I’ve done. The second was multiple people being quoted saying “it can’t be that simple!”

This resonates from my youth when I was sure the government was responsible for all my problems, and that more free money was the answer. All the smart people I knew read The Economist, which I hated. Every story in that thing talked about a problem, talked about how a market didn’t exist or was broken, and then how a market would fix it. I was absolutely certain it couldn’t possibly be that simple… until I figured out it usually was.

A friend of mine in psychology once said that she didn’t like working with smart people, because they’d agree about whatever psychological problem they had and its remedy, but then, having proved it to themselves, never do anything about it, or just argue about it every session. Whereas those not as smart would do the work, and prove whether it worked or not via the results. I see exactly the same thing on sugar and many other topics: my smarter friends tend to put a lot of faith in doctors or “science” as a theoretical concept, as opposed to how it actually gets done. They’ll agree or debate endlessly rather than do the research or try things themselves. For some reason I find this deeply troubling.

It’s the difference between investing in sugar (and so making money) and writing blog posts about how bad it is. The same moral quandary that some people struggle with in The China Hustle: some of those guys try to sound the alarm on fraud, and some just try to make more money. I notice that Michael Crichton tended to sound the alarm once he had money too, and became a personal hero for writing Travels.

Incidentally, The Magic Pill documentary describes how Aboriginal people in Australia died after we convinced them that Coca-Cola was a great breakfast for toddlers. This exactly parallels what Vilhjalmur Stefansson documented in his 1960 book, which details what happened to the Eskimo when we convinced them that eating fish was a bad idea (I’m not kidding).

In any case, don’t eat sugar.


How Amazon is Winning

This week is re:Invent, Amazon’s AWS conference in Las Vegas. They’ve been shipping new products about once every 15-30 minutes this week, it seems. Everything from magic AI cameras to container management to new databases.

As the announcements keep coming it’s easy to feel disoriented, which may be exactly the point.

John Boyd

I’ve been re-reading Certain to Win, Chet Richards’ book about Boyd’s OODA loop applied to business. The essential idea is to create disorientation, confusion and withdrawal in your enemy while promoting harmony, speed and effectiveness in your own organization.

It takes a book or three to more fully understand how to do this, but at a high level:

  • The faster you can execute, the less able an enemy is to predict your movements and counter them.
  • In order to know what to execute you need to:
    • Observe the situation,
    • Orient yourself to it using your knowledge (possibly acting straight from instinct, skipping the Decide step), and
    • Decide what to do.

To get there as a large organization you need decentralized command. There’s simply too much going on to keep track of everything and plan. To get decentralization, you need a bunch of trust between people.

It looks like Amazon gets its trust from its principles: a fairly concise set of deciders on how to act. They can be used, to some extent, to resolve situations without referring to management (and therefore politics, I guess). For example, number one is customer obsession. You can use this to decide on something, a feature say, by asking if it’s good for the customer. You’d be surprised how often product managers want to do things that work badly for customers.

This acts as the implicit guidance & control that goes from Orient to Act, skipping Decide. Or at least, it makes many decisions speedier and easier.

Speaking of speed, there’s another principle titled “Bias for Action”.

Certain to Win includes an anecdote about Yamaha trying to compete with Honda in the motorcycle market, the “Honda-Yamaha War” as it became known. Yamaha declared they’d build a huge bike factory presumably to reduce per-unit costs. Honda countered by producing about ten new bike designs for sale per month.

This huge iterative speed and innovation (trying things at random very fast) beat out Yamaha’s economic strategy of reducing costs and flooding the market. It’s the difference that Thiel tries to get across in Zero to One; the difference between building something new the first time and then copying it. Yamaha, I posit, was trying to out-copy while Honda was trying to create the new thing. Out-copying doesn’t work.

And thus with AWS and the whole of Amazon.

The bewildering speed of new products doesn’t make sense if you’re a competitor trying to keep up by copying. It’s too difficult to keep up and the environment is changing too rapidly. This is how a fast OODA loop works: by setting the timing of the battlefield. The timing, the cadence, is set here by Amazon at a rapid rate. In war, you want to be in the same place with your assets: your tanks should be moving around and changing so fast that the speed hinders your enemy’s very understanding of what is happening. What is happening simply stops making sense to them.

I can’t think of a better analogy for what Amazon is achieving here, whether or not OODA has consciously been deployed internally. If you look, Amazon quietly launches products all the time like this; re:Invent isn’t a special case. The only special cases which come to mind are the Kindle and Fire Phone launches, which, if I had to bet, probably act as negative datapoints internally at Amazon anyway. Why do a big launch? My bet is they figured out it wasn’t necessary; they have their homepage and it works just as well.

Amazon will continue to win for as long as their cadence outstrips their rivals’. The more rivals try to copy it, or model it into simple narratives like having a great website, warehouse robots or other things which are the result of the cadence, the more they’ll continue to lose. As soon as they start focusing on copying Amazon’s DNA instead of Amazon’s products, they’ll win. Much like various cities try to copy having big buildings and airports, just like New York, instead of copying the free market or the US Constitution.

Incidentally, this is why I think Microsoft is doing well right now. The Windows Insider Program is testing out tons of things all the time:

Streetview with Synchronized iPhones

Can you make Google Streetview-like images quickly and cheaply? That’s an R&D question I worked on while at Telenav after putting together the original OpenStreetView pitch.

Producing street view images isn’t trivial. Typically these are captured with dedicated hardware on dedicated vehicles driven around by paid employees. 

I remember years ago talking to people building things like this. You couldn’t use standard DSLRs because the shutters were only rated for something like 100,000 exposures. This is fine for your typical prosumer but street view vehicles would be burning through a camera every week or something. Then, the car needs a bucket-load of data storage and you put a lot of miles on the vehicle very quickly. It gets expensive quick!

OpenStreetView’s solution, and Mapillary’s, is to put a phone on the dashboard pointing forward. This gets a lot of useful information but nowhere near the 360-degree view we’re used to in street view. But, the hardware is readily available (everyone has a phone) and the people using it are working for free. So it’s a great tradeoff, really.

How to get from there to 360-degree views?

At the time, dedicated spherical hardware cameras were expensive and hard to use. Think $500+ and you couldn’t talk to them. Most had a built-in SD card and could do a few preset recording modes without GPS (because, why would you need GPS?). For a half-decent camera the costs were more like $1k+.

These prices were too high for even pro volunteers to spend. How could we drop the cost so that anybody could start taking 360-degree photos?

The obvious place to start is phones since they contain everything you need: cameras, compass, processing and a variety of radios. So, of course, I took an old iPhone and taped it to the roof of my truck, with a panoramic lens on it:

Old iPhone 4 devices can be found in bulk for ~$10 each which pulls the cost down. Taking photos as you drive or walk around resulted in images like this:

Notice that most of the image space is unused. If you unroll the donut you get a 360 strip, like this:

One of the few advantages of this approach is that “real” street view needs to blur things like faces and license plates. Since this strip is so low resolution, it comes pre-blurred!

You could in theory drive a car around like this and the phone could take photos and GPS points, unroll the image and upload it all in one. But… the images are pretty low resolution.
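For the curious, the unroll itself is just a polar-to-rectangular resample: walk the output strip, convert each column to an angle and each row to a radius, and look up the corresponding donut pixel. A minimal nearest-neighbour sketch over a plain nested-list image (a real pipeline would interpolate rather than snap to the nearest pixel):

```python
import math

def unroll(img, cx, cy, r_inner, r_outer, out_w, out_h):
    """Map the annulus of a donut image to a rectangular 360-degree strip.
    img is a list of rows of pixel values; (cx, cy) is the donut centre."""
    strip = []
    for row in range(out_h):
        # top of the strip corresponds to the outer edge of the donut
        r = r_outer - (r_outer - r_inner) * row / max(out_h - 1, 1)
        line = []
        for col in range(out_w):
            theta = 2 * math.pi * col / out_w
            x = int(round(cx + r * math.cos(theta)))
            y = int(round(cy + r * math.sin(theta)))
            line.append(img[y][x])   # nearest-neighbour sample
        strip.append(line)
    return strip
```

The strip’s width sets the angular resolution; since the donut’s inner rings hold very few pixels, the output is inherently low resolution, as noted above.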

The answer is to use more than one phone:

We can use many phones in a mount. If they all take a photo at the same time then we can stitch them together and build a panorama. There turns out to be quite a lot of subtlety in the timing, capture, upload and stitching. The fundamental limit is the lens geometry of the camera. iPhones, like other devices, have around a 40-degree field of view. Since you need lots of overlap for a good panorama, you start to need something like 9 phones.
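The phone-count arithmetic falls straight out of field of view and overlap. A quick sketch; the FoV and overlap figures below are rough illustrations rather than measurements:

```python
import math

def phones_needed(fov_deg: float, overlap: float) -> int:
    """Phones required to cover 360 degrees when each adjacent image pair
    overlaps by `overlap` (a fraction of a single field of view)."""
    effective = fov_deg * (1 - overlap)  # fresh coverage each phone adds
    return math.ceil(360 / effective)

# ~40 degree FoV with a modest 10% stitching overlap:
print(phones_needed(40, 0.1))
```

With zero overlap, nine 40-degree phones tile 360 degrees exactly; any real stitching overlap pushes the count up, which is why wider lenses help so much.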

You can get away with fewer phones by using wide-angle lenses and changing the geometry a little:

Because of the CCD layout you get more pixels and a wider field of view in landscape.

The mounts were built with OpenSCAD. You write snippets of code (on the left) which outputs 3D shapes on the right. Here, we make some boxes and then subtract out another box to make a phone holder. Then we rotate and build many copies of them. To hold it together, there’s a thin cylinder (in blue) at the bottom. This will output a 3D file for printing.

Actually printing this into a piece of plastic turns out to be surprisingly painful. Simplify3D helps a lot. The 3D model needs to be turned into a set of commands for the printer to execute (move here, print a little bit of plastic, move over here…). Every printer is different. It takes a long time. We’re a long way from “just print this file” as we’re used to with printing on paper.
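Those commands are G-code. A toy sketch of the sort of thing a slicer emits for a single square perimeter; real slicer output carries far more state (temperatures, retraction, fan speeds), and the extrusion constant here is a made-up placeholder:

```python
# Emit minimal G-code for one square perimeter at a fixed layer height.
# E is cumulative filament extruded; the per-mm rate is a placeholder value.

EXTRUSION_PER_MM = 0.05  # mm of filament per mm of travel (assumed)

def square_layer(size_mm: float, z_mm: float) -> list:
    corners = [(0, 0), (size_mm, 0), (size_mm, size_mm), (0, size_mm), (0, 0)]
    lines = [f"G1 Z{z_mm:.2f} F300        ; move to layer height"]
    e = 0.0
    x0, y0 = corners[0]
    lines.append(f"G0 X{x0:.2f} Y{y0:.2f}  ; travel to start, no extrusion")
    for x, y in corners[1:]:
        dist = abs(x - x0) + abs(y - y0)  # axis-aligned edges only
        e += dist * EXTRUSION_PER_MM
        lines.append(f"G1 X{x:.2f} Y{y:.2f} E{e:.3f} ; print edge")
        x0, y0 = x, y
    return lines

print("\n".join(square_layer(20, 0.2)))
```

Multiply this by thousands of layers and infill paths and it’s clear why slicing, not modeling, is where most of the per-printer tuning lives.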

Measurements in the 3D model don’t quite come out the same in real life, either. The plastic oozes and has its own material properties, so the printer doesn’t produce exactly what you send it; dimensions may be a few millimeters off. If you print walls that are too thin, they will snap. You also need to print a “raft”, a layer of plastic on the print bed that you print on top of and later snap off.

The cycle time is pretty long. Printing something can take 5-10 hours. Then you fix something, wait another 5-10 hours and so on.

The whole process is entertaining and educational, and reminds me yet again that manufacturing physical things is hard.

The resulting panoramas aren’t too bad, as you can see above. Each phone gets different lighting conditions, and the photos are projected onto the inside of a sphere. What you see above is just 5 phones, or about half a pano.

The software does some magic to try and sync timing. Initially I’d hoped that since the phones are (probably, hopefully) running ntpd they’d have pretty synchronized clocks. Wrong! Instead, a server (a laptop) runs the WiFi network all the phones are connected to. Each phone runs a thin client app which wakes up and connects. The server says something like “let’s take a photo in 4 seconds” and the phones all sync to this and take a photo at the same time.
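The trigger scheme amounts to: the server picks a shared fire time slightly in the future, and every phone sleeps until that moment on its own clock. A simplified sketch using threads in place of phones and no real networking:

```python
# Simulate the "take a photo in N seconds" trigger across several clients.
# In the real setup the server broadcasts the fire time over WiFi and each
# phone's app sleeps until that wall-clock moment, then fires its camera.

import threading
import time

def client(name: str, fire_at: float, results: dict) -> None:
    delay = max(0.0, fire_at - time.time())
    time.sleep(delay)                # wait for the shared trigger moment
    results[name] = time.time()      # stand-in for "take the photo now"

def trigger_all(names, lead_seconds: float = 0.2) -> dict:
    fire_at = time.time() + lead_seconds  # shared moment, chosen by server
    results: dict = {}
    threads = [threading.Thread(target=client, args=(n, fire_at, results))
               for n in names]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

shots = trigger_all([f"phone{i}" for i in range(5)])
spread = max(shots.values()) - min(shots.values())
print(f"capture spread across phones: {spread * 1000:.1f} ms")
```

The residual spread comes from scheduling jitter and camera latency, which is exactly the subtlety the real system had to fight.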

They then connect again and upload their picture and a GPS point. This is nice as you get, say, 9 GPS readings per pano. Then they start again to take another set of photos.

The server software would then (and this is where it’s incomplete) take all these photos, build a pano and upload it somewhere. The panos I built used autopano-SIFT to find overlaps in the images, but we could have taken compass readings too and used those, alone or in conjunction, to build the panoramas.

The finished image doesn’t look bad, as you can see. But it’s long and thin and has to crop the top and bottom off the images. The full pano would be much longer and thinner.

As the project progressed, two things happened.

  1. We started getting further from our goal (cheap, simple panos), not closer. Long thin pano strips aren’t 360 views; you can’t look up and down. And the cost and complexity kept going up: 3D printing, software to hang everything together (targeting an old iOS version, since these are iPhone 4s), car mounts, charging 9 phones at once…
  2. Readily available commercial solutions came down in price and complexity. Moto and Essential phones now have cheap panorama attachments, for example. They tend to use two fisheye lenses back-to-back in a small consumer package.

So, while this was an interesting R&D experiment and a lot was learned it ultimately didn’t work out. You can find all the code for the server, iOS client and 3D files here.

A Digital Globe

“Energy Flux,” data source: National Geospatial-Intelligence Agency, September 2000.

Crowdsourcing, as a term, has been around for something like 12 years according to Wikipedia. OpenStreetMap is a little older, and the idea stretches back fairly arbitrarily far; Wikipedia traces it to the 1714 Longitude Prize competition. That seems like a stretch too far, but in any case, it’s been around a while.

The ability to use many distributed people to solve a problem has had some obvious recent wins like Wikipedia itself, OpenStreetMap and others. Yet, to some large degree these projects require skill. You need to know how to edit the text or the map. In the case of Linux, you need to be able to write and debug software.

Where crowdsourcing is in some ways more interesting is where that barrier to entry is much lower. The simplest way you can contribute to a project is by answering a binary question – something with a ‘yes’ or ‘no’ answer. If we could ask every one of the ~7 billion people in the world if they were in an urban area right this second, we’d end up with a fair representation of a map of the (urban) world. In fact, just the locations of all 7 billion people would mimic the same map.

Tomnod is DigitalGlobe’s crowdsourcing platform, and today it’s running a yes/no campaign to find all the Weddell seals in its imagery of the Antarctic.

The premise is simple and effective: repeatedly look for seals in a box. If there are seals, press 1. If not, press 2. After processing tens of thousands of boxes you get a map of seals, parallelizing the problem across many volunteers.
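Aggregating those keystrokes is essentially per-box majority voting, with each box shown to several volunteers. A minimal sketch; the box IDs, answer labels and vote threshold are invented for illustration:

```python
# Majority-vote aggregation for binary crowd tasks.
from collections import Counter, defaultdict

def aggregate(votes, min_votes: int = 3) -> dict:
    """votes: iterable of (box_id, answer), answer in {'seal', 'empty'}.
    Returns box_id -> consensus answer, only for boxes with enough votes."""
    by_box = defaultdict(list)
    for box_id, answer in votes:
        by_box[box_id].append(answer)
    consensus = {}
    for box_id, answers in by_box.items():
        if len(answers) >= min_votes:          # withhold under-sampled boxes
            consensus[box_id] = Counter(answers).most_common(1)[0][0]
    return consensus

votes = [("A1", "seal"), ("A1", "seal"), ("A1", "empty"),
         ("B2", "empty"), ("B2", "empty")]
print(aggregate(votes))
```

Showing each box to multiple people and requiring a quorum is what turns noisy individual clicks into a usable seal map.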

Of course, it helps if you have a lot of data to analyze, with more coming in the door every day. There aren’t that many places in the world where that’s the case and DigitalGlobe is one of them, which is why I’m excited to be joining them to work on crowdsourcing.

Crowdsourcing today is pretty effective yet there are major challenges to be solved. For example:

  • How can we use machine learning to help users focus on the most important crowd tasks?
  • How can crowds more effectively give feedback to shape how machine learning works?
  • Why do crowds sometimes fail, and can we fix it? OpenStreetMap is a beautiful display map yet still lacks basic data like addresses. How can we counter that?

These feedback loops between tools, crowds and machine learning to produce actionable information are still in their infancy. Today, the way crowds help ML algorithms is still relatively stilted, as is how ML makes tools better, and so on.

Today, much of this is kind of like the batch processing of computer data in the 1960s. You’d build some code and data on punch cards, ship them off to the “priests” who ran the computer, and get results back in a few days. Crowdsourcing in most contexts isn’t dissimilar. We make a simple campaign, ship it to a Mechanical Turk-like service and then get our data back.

I think one of the things that really separates us from the high primates is that we’re tool builders. I read a study that measured the efficiency of locomotion for various species on the planet. The condor used the least energy to move a kilometer. And, humans came in with a rather unimpressive showing, about a third of the way down the list. It was not too proud a showing for the crown of creation. So, that didn’t look so good. But, then somebody at Scientific American had the insight to test the efficiency of locomotion for a man on a bicycle. And, a man on a bicycle, a human on a bicycle, blew the condor away, completely off the top of the charts.
And that’s what a computer is to me. What a computer is to me is it’s the most remarkable tool that we’ve ever come up with, and it’s the equivalent of a bicycle for our minds. ~ Steve Jobs

In the future, the one I’m interested in helping build, the links between all these things are going to be a lot more fluid. Computers should serve us, like a bicycle for the mind, to enhance and extend our cognition. To do that, the tools have to learn from the people using them, and the tools have to help make the users more efficient.

This is above and beyond the use of a hammer to efficiently hit nails into a piece of wood. It’s about the tool itself learning, and you can’t do that without a lot of data.

This is all sounding a lot like Clippy, a tool to help people use computers better. But Clippy was a child of the internet before it became the internet it is today. Clippy wasn’t broken because of a lack of trying, or a lack of ideas. It was broken by a lack of feedback. What’s the difference between Clippy and Siri or “OK, Google”? Feedback. Siri gets feedback from billions of internet-connected uses every day, where Clippy had almost no feedback to improve on at all.

Siri's feedback is predicated upon text. Lots and lots of input and output of text. What's interesting about DigitalGlobe's primary asset for crowdsourcing is all the imagery, of a planet that's changing every day. Crowdsourcing across imagery is already helping in disasters, scientific research, and 1,001 other fields with some simple tools on websites.

What happens when we add mobile, machine learning and feedback? It’ll be fun to find out.


Your phone knows where it is thanks to a suite of sensors that basically try to measure everything they possibly can about their environment. Where does the GPS think I am? What orientation is the device in? What WiFi networks can I see? What are the nearby Bluetooth devices? Have I been moving around a lot lately, accelerometer? What cell phone networks am I connected to?

Unless you’re standing in a field in Kansas with a clear view of the sky for ten minutes (so your GPS has lots of time to settle), your location will be questionable.

The original iPhone used WiFi network data to figure out where it was, because a GPS wasn't included. Skyhook (I think it was…) drove cars around major cities sniffing for networks while recording their geolocation. An iPhone could then look up its location by comparing the networks it could see against that database of network locations. Better still, it could contribute back: any unknown networks visible at the same spot could be added to the database with that location.
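The lookup side of that scheme can be sketched in a few lines. This is a hedged illustration, not Skyhook's actual algorithm: it assumes a pre-surveyed database mapping network BSSIDs to coordinates, and estimates position as a signal-strength-weighted centroid of the known networks in view. All names and coordinates here are hypothetical.

```python
# Hypothetical surveyed database: BSSID -> (lat, lon)
WIFI_DB = {
    "aa:bb:cc:00:00:01": (37.7749, -122.4194),
    "aa:bb:cc:00:00:02": (37.7751, -122.4190),
    "aa:bb:cc:00:00:03": (37.7747, -122.4198),
}

def estimate_position(visible, db=WIFI_DB):
    """Estimate location as a signal-weighted centroid of the known
    positions of visible networks. `visible` maps BSSID -> RSSI (dBm)."""
    total_w = lat = lon = 0.0
    for bssid, rssi in visible.items():
        if bssid not in db:
            continue  # unknown network: a candidate to add to the database
        w = 10 ** (rssi / 10.0)  # convert dBm to linear power as a weight
        db_lat, db_lon = db[bssid]
        lat += w * db_lat
        lon += w * db_lon
        total_w += w
    if total_w == 0:
        return None  # no known networks in sight
    return (lat / total_w, lon / total_w)

# A scan seeing two known networks, one much stronger than the other:
scan = {"aa:bb:cc:00:00:01": -40, "aa:bb:cc:00:00:02": -70}
print(estimate_position(scan))  # lands near the stronger network
```

Real systems are far more elaborate (they model signal propagation, filter stale database entries, and fuse this with cell and GPS data), but the core idea is this kind of weighted lookup.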

As phones added all kinds of sensors, these databases grew and became free-floating associations of place information. We can now correlate almost anything with where you are, so that if the GPS doesn't work (because you're inside a building, say), devices fall back on whatever other clues they have to figure out where you are.

Integrating all this information is still a challenge, especially if you're driving around a major city. The reliability of all the location signals is questionable, as Pete Tenereillo outlined in a recent LinkedIn post. Driving around San Francisco, you're still subjected to the map jumping all over the place, even with high-end phones and the latest software.

How users experience this can happen at the other end too, when you see your uber or delivery driver jumping around the map on their way to you:

As well as finding your location, many apps want to store it too. There are 1,001 ways to do that: different amounts of data, different formats, different places to send it. What ends up happening, quite reasonably, is that various location-based app developers both capture and store location data in many different ways, and there are paid-for APIs and SDKs to help with pieces of the puzzle.
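To make the "1,001 ways" concrete, here's one common choice: serializing a location fix as a GeoJSON Feature. The geometry structure follows the GeoJSON spec (RFC 7946); the extra properties (timestamp, accuracy) are app-specific decisions, which is exactly where different apps diverge.

```python
import json
import time

def location_to_geojson(lat, lon, accuracy_m):
    """Serialize a single location fix as a GeoJSON Feature dict."""
    return {
        "type": "Feature",
        "geometry": {
            "type": "Point",
            # Note: GeoJSON coordinate order is [longitude, latitude]
            "coordinates": [lon, lat],
        },
        "properties": {
            # App-specific fields; every app picks its own
            "timestamp": int(time.time()),
            "accuracy_m": accuracy_m,
        },
    }

fix = location_to_geojson(37.7749, -122.4194, accuracy_m=12.5)
print(json.dumps(fix, indent=2))
```

Another app might store the same fix as a CSV row, a protobuf, or a vendor-specific JSON shape, which is the fragmentation problem the next paragraphs get at.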

What's changed over time is the value of this data. Aggregating vast amounts of anonymized location data can help with use cases such as building base maps, for example. If you take all the GPS traces of everyone every day, you can figure out where all the roads are, their speed limits, and so on. This data is equally valuable for other uses, advertising and predicting stock prices being two examples: if you know how many people went to Walmart this week, you have an indication of their stock value. Things like this appear to have driven the new $164M round for Mapbox – "Mapbox collects more than 200 million miles of anonymized sensor data per day".
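The "figure out where the roads are" step can be sketched very simply. This is a toy illustration, not how any mapping company actually does it: bin every GPS point from every trace into a coarse grid, and keep the cells that get hit often enough to plausibly be a road.

```python
from collections import Counter

CELL = 0.001  # grid cell size in degrees (~100 m); an arbitrary choice

def busy_cells(traces, min_hits=3):
    """Count GPS points per grid cell across all traces and return the
    set of cells visited often enough to plausibly be a road."""
    counts = Counter()
    for trace in traces:
        for lat, lon in trace:
            cell = (round(lat / CELL), round(lon / CELL))
            counts[cell] += 1
    return {cell for cell, n in counts.items() if n >= min_hits}

# Three drivers pass the same spot; one stray point appears once:
traces = [
    [(37.7749, -122.4194)],
    [(37.7749, -122.4194)],
    [(37.7749, -122.4194)],
    [(40.0, -100.0)],  # a one-off point, filtered out as noise
]
print(busy_cells(traces))  # only the repeated cell survives
```

Production pipelines add map-matching, trajectory smoothing, and speed estimation on top, but the underlying signal is the same: enough traces through the same place means a road.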

What’s lacking is an open and standardized way to capture and store this data. Enter OpenLocate, an open iOS and Android SDK to simplify capture and storage of location data.

It’s supported by a long list of backers and it should remove a bunch of work when developing anything location-based, much as Auth0 removes having to set up custom authentication. For more, see the announcement blog post here!

Social Media

Scrolling social media, I’m reminded of this scene in The Hunger Games: Catching Fire:

"She's engaged. Make everything about that. What kind of dress is she gonna wear? – floggings. What's the cake gonna look like? – executions. Who's gonna be there? – fear. Blanket coverage. Shove it in their faces."

It’s startlingly accurate. As I scroll today, this is what I see:

  • Smiling faces at dinner
  • Government is going to kill you by taking away healthcare
  • Someone graduating college
  • Debt is at an all-time high
  • A beautiful picture of a morning sky
  • Buy this Bluetooth electrocardiogram (ECG) chest strap in case you have a heart attack
