There’s an old saying: “I don’t know what programming language scientists and engineers will use in the 22nd century, but I know it will be called FORTRAN.” FORTRAN was among the first real programming languages and, along with LISP, one of the oldest still in common use. If you are one of those who still love FORTRAN, you no longer have to be left out of the Web development craze, thanks to Fortran.io.
Naturally, the Fortran.io site is served by — what else — FORTRAN. The system allows for Jade templates, SQLite databases, and other features aimed at serving up web pages. The code is hosted on GitHub, and you can find several examples there, as well.
The Apple II was the machine that many say launched Apple as a company. As with many popular computers of the 1980s, the Apple II maintains a steady following to this day, with enthusiasts who continue to develop new hardware and software to keep the platform alive.
[deater] had scored an Uthernet II Ethernet interface for his Apple IIe, based on the venerable W5100 chipset. He decided to have some fun and wrote a webserver for the Apple II in BASIC. The program sets up the Ethernet card with a series of PEEKs and POKEs, and then listens for incoming packets before responding with the requisite data loaded from floppy disk.
The server can deal with HTML, text, and even JPEG and PNG images. It’s even compliant with RFC 2324. It does suffer from some limitations, however: the disk format can only hold 140 kB, the server can only handle an 8 kB file at a time, and because the code relies heavily on string manipulation, it is painstakingly slow.
Before you get too excited, the machine is running on a local network only, so you can’t check it out from here. However, [deater] has kindly released the source code if you wish to run it for yourself.
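The serve-a-file-per-request flow is conceptually simple. Here’s a rough sketch of the same idea in Python rather than [deater]’s Applesoft BASIC; the `http_response` helper is our invention for illustration, not his code:

```python
from pathlib import Path

def http_response(path, www_root="."):
    # Build a minimal HTTP/1.0 response for the requested path,
    # mirroring the one-file-per-request flow of the BASIC server.
    target = Path(www_root) / path.lstrip("/")
    if target.is_file():
        body = target.read_bytes()
        header = b"HTTP/1.0 200 OK\r\nContent-Length: %d\r\n\r\n" % len(body)
        return header + body
    return b"HTTP/1.0 404 Not Found\r\n\r\n"
```

The real thing, of course, has to do all of this with PEEKs, POKEs, and floppy reads instead of a filesystem call.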
We’ve seen networks built over some interesting mediums, but QR codes have to be a new one. [Eric Seifert] decided to try to use QR codes to make an IP connection. He used these visual codes to create a bi-directional connection between two camera-equipped computers. He’s a persistent chap, because it works: in one of his videos, he shows an SSH connection between two devices.
He faced a number of challenges on the way. Although there is plenty of code to read QR codes, the amount of data that can be encoded in and read from them is limited. There is a binary mode that can be used with QR codes, but it is really inefficient. [Eric] decided to use base32 coding instead, packing the data into each frame as alphanumeric text. Each QR code image that is created and received is numbered, so the system can keep track and request any lost images. He also had some problems keeping the data consistent between the encoded and decoded versions, so he had to add some padding to the data before it would work. The system uses python-pytun to create a TUN/TAP device that carries the data.
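The framing scheme can be sketched in a few lines of Python. This is not [Eric]’s code; the frame size and the six-digit sequence prefix are assumptions for illustration, but it shows the base32-plus-sequence-number idea:

```python
import base64

FRAME_PAYLOAD = 512  # bytes of raw data per QR frame (assumed size)

def pack_frames(data):
    # Split raw bytes into numbered frames. Base32 keeps each frame in
    # text form, which QR codes handle far better than binary mode.
    frames = []
    for seq, start in enumerate(range(0, len(data), FRAME_PAYLOAD)):
        chunk = data[start:start + FRAME_PAYLOAD]
        body = base64.b32encode(chunk).decode("ascii").rstrip("=")
        frames.append(f"{seq:06d}{body}")
    return frames

def unpack_frames(frames):
    # Reassemble frames by sequence number, tolerating out-of-order
    # arrival; restore the base32 padding stripped during packing.
    out = b""
    for frame in sorted(frames, key=lambda f: int(f[:6])):
        body = frame[6:]
        body += "=" * (-len(body) % 8)
        out += base64.b32decode(body)
    return out
```

With numbered frames like these, the receiver can spot a gap in the sequence and ask for the missing image to be shown again.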
The speed of the connection is rather slow: in his demo video, the two computers take over a minute to exchange keys for an SSH connection, and [Eric] measured the speed of the connection at about 100 bits per second. But even getting something like this working at all is a significant achievement. He has published his code on GitHub.
We’ve featured the work of [Eric] before: he created a data connection using an iPod FM transmitter.
Back when the original Internet, the digital one, was being brought together, there was a vicious standards war. The fallout from the war fundamentally underpins how we use the Internet today, and what’s surprising is that things didn’t work out how everyone expected. The rebel alliance won, and when it comes to standards, it turns out that’s a lot more common than you might think.
Looking back, the history of the Internet could have been very different. In the mid-eighties the OSI standards were the obvious choice. In 1988 the Department of Commerce issued a mandate that all computers purchased by government agencies should be OSI-compatible starting from the middle of 1990, and yet two years later the battle was over and the OSI standards had lost.
In fact, by the early nineties the dominance of TCP/IP was almost complete. In January of 1991 the British academic backbone network, called JANET (which was based around the X.25 Coloured Book protocols), established a pilot project to host IP traffic on the network. Within ten months IP traffic had exceeded X.25 traffic, and IP support became official in November.
“Twenty five years ago a much smaller crowd was fighting about open versus proprietary, and Internet versus OSI. In the end, ‘rough consensus and running code’ decided the matter: open won and Internet won,”
—Marshall Rose, chair of several IETF Working Groups during the period
This of course wasn’t the first standards battle; history is littered with innumerable standards that have won or lost. Nor was it the last the Internet was to see. By the mid-noughties, SOAP and XML were seen as the obvious way to build out the distributed services we all, at that point, already saw coming. Yet by the end of the decade SOAP and XML were in heavy retreat. RESTful services and JSON, far more lightweight and developer-friendly than their heavyweight counterparts, had won.
“JSON appeared at a time when developers felt drowned by misguided overcomplicated XML-based web services, and JSON let them just get the job done,”
Yet, depending on which standards body you want to listen to, ECMA or the IETF, JSON only became a standard in 2013 or 2014 respectively. While the IETF RFC talks about semantics and security, the ECMA standard covers only the syntax. Despite that, it’s unlikely many people have actually read the standards, and that includes the developers using the standard and even those implementing the libraries those developers depend on.
We have reached the point where standardization bodies no longer create standards, they formalize them, and the way we build the Internet of Things is going to be fundamentally influenced by that new reality.
The Standardization of IoT
Right now a new standards body or alliance, pushing its own standard or group of standards, appears practically every month. And of course there are companies, Samsung for instance, that belong to more than one of these alliances. I think it’s unlikely that these bodies will create a single standard to rule them all, not least because many Internet of Things devices are incapable of speaking TCP/IP. The demise of Moore’s Law may well mean that the entire bottom layer, the cheap throwaway sensors, will never speak TCP/IP at all. It will not, as they say, be turtles all the way down.
These bodies also move slowly. Despite the fact that the member companies live on Internet time, no standards body does. The “rough consensus and running code” of the IETF era will not be replicated by today’s standards bodies. Made up of companies, not people, they’re not capable. Instead that consensus will be built outside of the existing standards bodies, not inside them.
“Today, the industry is looking at a much harder set of problems. My guess is that we’re going to end-up throwing a lot of stuff — products, code, and architecture — away, again, and again, and again. The pressure to deploy is much higher now than it was then,”
We’re Stuck in the Unknown
No one really knows how this is going to shake out right now, and obviously the outcome of that standards battle, which I think is going to take at least a decade, will have a fundamental influence on the path our technology takes. But I wouldn’t guarantee that any of the current players will emerge victorious. In fact, I think there will be another rebellion much like we saw with the original network standards. Despite the rhetoric from the standards bodies, I actually think most of the current architectures don’t stand much of a chance of mass adoption.
I think any architecture that stands a chance is going to have to be a lot flatter than most of the current ones, with things actually talking to other things rather than to people. Significantly absent from most, if not all, of the current architectures is any degree of negotiation and micro-transaction amongst the things themselves. As the number of things grows, the limits of human attention, and of the interest you have in micro-managing your things, mean that we simply won’t.
Beyond that, any architecture that stands a chance of making the next generation of Internet of Things devices work needs to deal with selective sharing of data: both subsets of data from individual things and supersets drawn from multiple things. Right now we’re seeing those proto-standards emerge in interesting ways. For a brief period of time it looked like Twitter was going to become a protocol. It could, in fact, have been the protocol.
Twitter Could Have Been the Standard
Back in 2010, Twitter proposed something called ‘annotations,’ an experimental project that would let you attach 1 kB of JSON to each tweet. Annotations could be strings of text, a URL, a location tag, or arbitrary bits of data. It would have fundamentally changed the way Twitter operated.
It could, in other words, have become the backbone network — a message bus. Not just for moving data, but for moving apps. With an appropriately custom client, you could have attached small applications to a tweet. Moving code to data, rather than data to code.
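A rough Python sketch of what the proposal implied is below. The `make_annotation` helper and the namespace layout are our invention for illustration; only the 1 kB-of-JSON-per-tweet limit comes from the proposal itself:

```python
import json

MAX_ANNOTATION_BYTES = 1024  # the proposed per-tweet JSON limit

def make_annotation(namespace, attributes):
    # Hypothetical helper: build a namespaced annotation and reject
    # payloads whose JSON encoding exceeds the proposed 1 kB cap.
    payload = {namespace: attributes}
    encoded = json.dumps(payload, separators=(",", ":")).encode("utf-8")
    if len(encoded) > MAX_ANNOTATION_BYTES:
        raise ValueError("annotation exceeds 1 kB limit")
    return payload
```

A kilobyte isn’t much, but it’s enough for a location, a URL, a pointer to richer data elsewhere, or even a tiny piece of code.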
Building something like this is really hard, a classic social-network chicken-and-egg proposition. But Twitter already had the users and, at least at the time, an army of third-party developers. It was not to be; by the end of 2011 annotations were alternative history.
“Annotations is still more concept than reality. Maybe some day we’ll have more to say about them”
Perhaps they dropped the idea not because they could see it failing, but because they could see it being too successful, with the accompanying calls for openness and the invasion of clones that would duplicate Twitter at the API level, if not at the backend.
As with Everything: IoT as a Service
Right now perhaps the easiest way to get one Internet of Things device to talk to another isn’t a standard, it’s a service. The de facto Internet of Things messaging bus belongs to one company, and that company is IFTTT. “If This Then That” is currently one of the few ways that consumers can get the incompatible things in their life to talk to one another. For someone building a device, that doesn’t come cheaply.
In the long term, however, it’s unlikely we’re going to let one company become the backhaul for consumer Internet of Things traffic. It’s unlikely that there will be one platform to rule them all. I don’t think it’s going to be long until IFTTT starts to see some complaints about that, and, inevitably, clones.
In the end I think the standard (or, realistically, the multiple standards) that will become the Internet of Things as we know it, or will know it, currently sits as “slide ware” being pitched to venture capitalists. The standards exist as throwaway slides, where the founders wave their hands and say “We’ll be doing this, so we can do this other thing that makes money.”
The standards for the Internet of Things will be a rebellion against the standards bodies. It will be developers deciding that what they’re doing is good enough for now, and that they should keep doing it that way until people make up their minds about what we all really should be doing. Whatever that is will end up being good enough for everybody, and will win this particular standards war.
[Victor-Chew] is tired of setting clocks. After all, here we are in the 21st century, why do we have to adjust clocks (something we just did for daylight saving time)? That’s why [Victor] came up with ESPClock.
Based on a $2 Ikea analog clock, [Victor] had a few design goals for the project:
Automatically set the time from the network
Automatically adjust for daylight saving time
Not cost much more than a regular clock
Run for a year on batteries
The last goal is the only one that remains unmet. Even with a large battery pack, [Victor’s] clock runs out of juice in a week or so. You can see some videos of the clock syncing with network time, below.
It is easy to armchair quarterback, but we think [Victor] should investigate putting the processor in a deep sleep mode for most of the time. That probably means you’d need a button to wake it up for configuration and there might be some other modifications required.
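The second goal, DST handling, is the sort of thing that’s easy to get subtly wrong. As a sketch of the idea in plain Python (the helper names are ours, and this uses the post-2007 US rule; [Victor]’s actual firmware runs on an ESP module and does this differently):

```python
from datetime import datetime, timedelta

def nth_sunday(year, month, n):
    # Date of the n-th Sunday of the given month (weekday: Mon=0..Sun=6).
    first = datetime(year, month, 1)
    days_until_sunday = (6 - first.weekday()) % 7
    return first + timedelta(days=days_until_sunday + 7 * (n - 1))

def us_dst_active(local_time):
    # US rule since 2007: DST runs from 2:00 on the second Sunday in
    # March until 2:00 on the first Sunday in November.
    start = nth_sunday(local_time.year, 3, 2).replace(hour=2)
    end = nth_sunday(local_time.year, 11, 1).replace(hour=2)
    return start <= local_time < end
```

Combine a rule like this with NTP-derived UTC and the clock never needs touching, which is exactly the point of the project.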
If you want to waste a few more ESP modules, you could try this clock instead.
So you’ve built out your complete home automation setup, with little network-connected “things” scattered all around your home. You’ve got net-connected TVs, weather stations, security cameras, and whatever else. More devices means more chances for failure. How do you know that they’re all online and doing what they should?
[WTH]’s solution is pretty simple: take a Raspberry Pi Zero, ping all the things, log, and display the status on an RGB LED strip. (And if that one-sentence summary was too many words for you, there’s a video embedded below the break.)
Before you go screaming “NOTAHACK!”, we should let you know that [WTH] already described it as such. This is just a good idea that helps him keep track of his hacks. But that doesn’t mean that there aren’t opportunities for hacking. He uses the IFTTT service and Google Drive to save the ping logs in a spreadsheet, but we can think of about a billion other ways to handle the logging side of things.
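The core loop is little more than a ping and a color map. A minimal Python sketch, not [WTH]’s actual code, and with a green-for-up, red-for-down mapping that is our assumption:

```python
import subprocess

def host_up(host, timeout_s=1):
    # One ICMP echo via the system ping; assumes a Linux-style ping CLI.
    result = subprocess.run(
        ["ping", "-c", "1", "-W", str(timeout_s), host],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return result.returncode == 0

def led_colors(statuses):
    # One LED per device: green when reachable, red when down.
    return [(0, 255, 0) if up else (255, 0, 0) for up in statuses]
```

Feed `led_colors` the results of pinging each device and push the triples out to the strip; everything else, the logging included, is gravy.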
For many of us, this is a junk-box build. We’re sure that we have some extra RGB LEDs lying around somewhere, and spare cycles on a single-board-computer aren’t hard to come by either. We really like the simple visual display of the current network status, and implementing something like this would be a cheap and cheerful afternoon project that could make our life easier and (even more) filled with shiny LEDs. So thanks for the idea, [WTH]!