Today’s project, ESP32lights, is a smart device based on the esp32 chip.


Thanks to ESP32lights you can turn a load on and off (I used it for my Christmas lights):

  • manually
  • based on daily schedules
  • based on the light intensity

ESP32lights connects to your wifi network, can be configured and operated via a web browser, and is optimized for mobile devices (responsive web interface based on jQuery Mobile).


The heart of ESP32lights is the Lolin32 Lite devboard by Wemos. One of its digital pins is connected to a relay module, which controls the load. Two digital pins are assigned to the first i2c controller of the esp32 chip and are connected to a BH1750 light intensity sensor. All the elements are powered by an HLK-PM01 module by Hi-Link, which directly converts the 220V AC mains voltage to 5V DC without the need for any external components:


All the components are placed in a waterproof enclosure, to be able to use the device outdoors:



The firmware for the esp32 devboard is available in my GitHub repository.

In the following paragraphs I’ll explain how it works. If you just want to build the device, you can program the firmware as follows:

1) clone my repository to a local folder on your PC (you also have to install the esp-idf development environment):

2) configure the correct settings for your wifi network and your timezone via menuconfig:


3) compile and flash the firmware:

make flash

4) store the image of the SPIFFS partition in the flash memory (replace the COM port with your devboard’s and the path with the location where you saved the img file):

python $IDF_PATH/components/esptool_py/esptool/esptool.py --chip esp32 --port COM15
 --baud 115200 write_flash --flash_size detect 0x180000 /home/esp32lights.img

If everything is ok, when you connect to the serial console of the devboard (make monitor) you should see the following output:



ESP32lights publishes an HTTP interface you can use to set the schedules, set the light intensity threshold, or manually turn the load on and off.

You can open the web interface by connecting – through a PC or a smartphone – to the address http://<esp_ip> (the IP address of the board is displayed in the serial output, as shown in the previous paragraph).

The interface has 3 tabs, one for each working mode:


The page footer displays the current working mode and the relay status:


In this short video, you can see how the device works:


I developed the firmware for ESP32lights leveraging what I explained in my previous tutorials about the esp32 chip. If you follow my blog, you probably noticed that I really like the divide et impera method, that is, dividing a complex project into small, simpler tasks.

All the configuration settings of ESP32lights (current working mode, start and stop times…) are stored in the NVS partition of the flash memory, as I explained in this tutorial. This way, they are kept even if the chip is restarted:

nvs_handle my_handle;
int working_mode;
// initialize the NVS flash partition
esp_err_t err = nvs_flash_init();
// open the "storage" namespace in read/write mode
err = nvs_open("storage", NVS_READWRITE, &my_handle);
// read the saved working mode back from the "mode" key
err = nvs_get_i32(my_handle, "mode", &working_mode);

The different elements of the web interface (html page, css style sheets…) are stored in an SPIFFS partition. In a previous tutorial you learned how to prepare the image and, in your program, get its content:


In other tutorials I’ve also explained how to connect to a wifi network and how to use digital pins.

The setup phase is completed after having configured the BH1750 light intensity sensor. This sensor offers an i2c interface and therefore can be connected to one of the two i2c controllers of the esp32 chip as shown in this tutorial. In my program I included a driver developed by pcbreflux.

The main program runs two different tasks:

xTaskCreate(&http_server, "http_server", 20000, NULL, 5, NULL);
xTaskCreate(&monitoring_task, "monitoring_task", 2048, NULL, 5, NULL);


The first one publishes the web interface, while the second one verifies – every second – if conditions exist (time or light intensity) to turn the load on or off:

if(working_mode == MODE_LIGHT && lux_valid) {
  int actual_light_value = get_light_value();
  if(actual_light_value < lux) {
    if(relay_status == false) {
      gpio_set_level(CONFIG_RELAY_PIN, 1);
      relay_status = true;
    }
  }
}
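The on/off decision in the snippet above can be isolated as a pure function, which makes it easy to test off-device. This is just a sketch: the function name and the MODE_LIGHT value here are my own assumptions for illustration, not taken from the repository.

```c
#include <assert.h>
#include <stdbool.h>

#define MODE_LIGHT 2   /* assumed value, for illustration only */

/* Returns the desired relay state, given the current one. In light mode
   the relay follows the light reading against the threshold; in other
   modes this function leaves the relay state unchanged. */
bool relay_should_be_on(int working_mode, bool lux_valid,
                        int light_value, int lux_threshold,
                        bool relay_status)
{
    if (working_mode == MODE_LIGHT && lux_valid)
        return light_value < lux_threshold;  /* darker than threshold: on */
    return relay_status;                     /* other modes: no change */
}
```

Keeping the decision separate from gpio_set_level() means the monitoring task only has to compare the desired state with the current one and toggle the pin when they differ.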

Here’s in detail how the HTTP server fetches a static resource stored in the SPIFFS partition.

First it prepends the SPIFFS root prefix (/spiffs) to the resource path:

sprintf(full_path, "/spiffs%s", resource);

then it checks if the resource exists in the partition:

if (stat(full_path, &st) == 0) {

if so, it opens the file in read mode:

FILE* f = fopen(full_path, "r");

and sends the content of the file to the client, reading blocks of 500 bytes:

char buffer[500];
while(fgets(buffer, 500, f)) {
  netconn_write(conn, buffer, strlen(buffer), NETCONN_NOCOPY);
}
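Putting the steps above together, here is a host-testable sketch of the whole routine. The function name, the configurable root parameter, and writing to a FILE* are my own assumptions for illustration; on the device the output goes through netconn_write() instead.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>

/* Sketch of the static-file handler: build the full path, check that the
   resource exists, then stream it in blocks. Returns the number of bytes
   sent, or -1 if the resource is missing. */
long spiffs_serve_sketch(const char *root, const char *resource, FILE *out)
{
    char full_path[256];
    struct stat st;
    char buffer[500];
    long sent = 0;

    snprintf(full_path, sizeof(full_path), "%s%s", root, resource);

    if (stat(full_path, &st) != 0)
        return -1;                        /* resource not found */

    FILE *f = fopen(full_path, "r");
    if (f == NULL)
        return -1;

    while (fgets(buffer, sizeof(buffer), f)) {
        size_t len = strlen(buffer);
        fwrite(buffer, 1, len, out);      /* netconn_write() on the device */
        sent += (long)len;
    }
    fclose(f);
    return sent;
}
```

Note that fgets() is line-oriented rather than a true fixed-size block read, which is fine for text assets like HTML and CSS but would truncate binary data at the first NUL byte.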

Finally, this is how the web interface works. The interface is made of an HTML page (index.html) which uses jQuery to perform AJAX requests to the server and update the different page elements. You don’t need to enter the page name in the browser, because the HTTP server automatically serves it when the default page is requested:

if(strstr(request_line, "GET / "))
  spiffs_serve("/index.html", conn);

Three endpoints are published by the server and accessed using AJAX calls:

  • setConfig, to send a new configuration
  • getConfig, to read the current configuration
  • getLight, to get the current light intensity

When the page is loaded, it calls the getConfig endpoint to display the current configuration; it also schedules a call to the getLight endpoint every 5 seconds to keep the light value updated:

setInterval("refreshLightLevel()", 5000);

When you click the SET button, the page calls setConfig to send the new configuration to the server:


All the information is sent in JSON format. The esp-idf framework includes the cJSON library, which makes it easy to create or parse a JSON message:

// parsing a received message
cJSON *root = cJSON_Parse(body);
cJSON *mode_item = cJSON_GetObjectItemCaseSensitive(root, "mode");

// building a reply
cJSON *reply = cJSON_CreateObject();
cJSON_AddNumberToObject(reply, "lux", light_value);
char *rendered = cJSON_Print(reply);

Making of

I started the build by cutting a perfboard to the size of the enclosure:


The perfboard is screwed to the enclosure using two spacers:


I made two holes in one side of the enclosure for the main switch and for a status led:


I soldered all the different components on the perfboard and made the electric connections using wires:


To simplify the installation, all the external components (led, relay module…) are connected using jumpers:


First test:


I attached the light sensor to the top of the enclosure, after having made a hole to allow it to “see” the external light:


Finally I made the external connections, installing the main switch:


and connecting the output of the relay module to a wire with a universal plug at its end:


FPVising an Eachine H8 mini

The Eachine H8 mini quadcopter (here on Banggood) is without doubt one of the most popular quadcopters, both for its low price (10-15€) and for its good performance in terms of speed and flight time.

This quadcopter does not include a camera and therefore cannot be used to fly in FPV (First Person View) mode… in this post I’m going to show you how to modify it to add this feature for a few euros (or dollars ;) )!

There are other interesting mods for this quadcopter: for example, silver13 developed some open-source alternative firmware (acro and dual-mode), while goebish decoded its communication protocol, so it’s now possible to control the miniquad with other transmitters.

Shopping list

To modify your H8 mini you will need:

In addition you can buy a dedicated antenna, but you can also replace it with just some wire.


Smoke test

Before working on the miniquad, let’s perform a smoke test on all the components to verify that they work individually.

Let’s start with the camera and the transmitter: solder the camera’s yellow wire to the VIN (Video IN) pin of the transmitter and the black wire to GND (ground). Now solder a piece of wire to the ANT (antenna) pin of the transmitter and, using a 5V external power supply, power both the camera and the transmitter:


Using a 5.8GHz receiver you can verify that they work fine:


Now test the step-up regulator. Connect a 1-cell LiPo to the + and – pins and verify with a multimeter that the output of the regulator is close to 5V (the voltage required to power the other components):



Once you’ve verified that all the components are working, you can start modifying the quadcopter.

With a screwdriver, remove the 4 screws to open the plastic case and remove the printed circuit board:


Identify the two pads connected to the battery and solder two wires; you’ll use them to power the new components:


Connect the outputs of the step-up converter to pins GND and VCC of the transmitter, then solder the two wires to its input pins. I also added a small switch to be able to turn the FPV system on/off independently:


With some hot glue, stick the converter and the transmitter to the bottom of the quadcopter, and the camera to the front. Connect the camera to the transmitter as already explained for the smoke test and solder the antenna. If you chose to use a wire as the antenna, you can add a small plastic tube to keep it vertical:


Your H8 mini is now ready for its first FPV flight!


What’s So Bad about the Imperial System Anyway?

As a Hackaday writer, you can never predict where the comments of your posts will go. Some posts seem to be ignored, while others have a good steady stream of useful feedback. But sometimes the comment threads just explode, heading off into seemingly uncharted territory only tangentially related to the original post.

Such was the case with [Steven Dufresne]’s recent post about decimal time, where the comments quickly became a heated debate about the relative merits of metric and imperial units. As I read the thread, I recalled the numerous and similarly tangential comments on various reddit threads bashing the imperial system, and decided that enough was enough. I find the hate for the imperial system largely unfounded, and so I want to rise to its defense.

Did you measure that room in 'feet', or in 'flip-flops'?

What is a system of units anyway? At its heart, it’s just a way to measure the world. I could very easily measure the length and width of a room using my feet, toe to heel. Most of us have probably done just that at some point, and despite the inconvenient and potentially painful problem of dealing with fractionalization of your lower appendage, it’s a totally valid if somewhat imprecise method. You could easily pace out the length of the room and replicate that measurement to cut a piece of carpet, for instance. It’s not even that much of a stretch to go to the home center and buy carpet off the roll using your personal units — you might get some strange looks, but you’ll have your personal measuring stick right with you.

The trouble comes when you try to relate your units to someone not in possession of your feet. Try to order carpet online and you’ll run into trouble. So above and beyond simply giving us the tools to measure the world, systems of units need to be standardized so that everyone is measuring the same thing. Expanding trade beyond the dominion where one could refer to the length of the king’s arm and have that make sense to the other party was a big driver of the imperial system first, and then the metric system. And it appears to be one of the big beefs people have regarding the United States’ stubborn insistence on sticking with our feet, gallons, and bushels.

How Ridiculous are We Talking?

The argument that imperial units are based on ridiculous things like the aforementioned king’s arm? That’s not an argument when a meter was originally defined as one 10-millionth of the distance from the north pole to the equator. Even rigorously defined relative to the speed of light or the wavelength of krypton-86 emissions in a vacuum, the meter is based on phenomena that are completely inaccessible to the people who will use it, and unrelated to their daily lives. At least everyone has seen a foot that’s about a foot long.

Doing the conversions between imperial units and SI units is tedious and error prone, they say. Really? Perhaps I’d buy that argument a hundred years ago, or even fifty. But with pervasive technology that can handle millions of mathematical operations a second, there’s not much meat on that bone. I’ll grant you that it’s an extra step that wouldn’t be needed if everyone were on the same system, and that it could lead to rounding errors that would add up to quite a bit of money over lots of transactions. But even then, why is that not seen as an opportunity? Look at financial markets — billions are made every day on the “slop” in currency exchanges. I find it unlikely that someone hasn’t found a way to make money off unit conversions too.

Another point of contention I often see is that imperial units make no sense. Yes, it’s true that we have funny units like gills and hogshead and rods and chains. But so what? Most of the imperial system boils down to a few commonly used units, like feet and gallons and pounds, while the odder units that once supported specialized trades — surveyors had their rods and chains, apothecaries had their drams and grains — are largely deprecated from daily life now.

Deal with It

For the units that remain in common use, the complaint I hear frequently is, “Why should I be forced to remember that there are 5,280 feet in a statute mile? And why is there a different nautical mile? Why are there 12 inches in a foot anyway? A gallon has four quarts, why does that make sense?” And so on. My snappy retort to that is, again, “So what?” If you’re not a daily user of the imperial system, then don’t bother yourself with it. Stick to metric — we don’t care.

If you’re metrified and you’re forced to use imperial units for some reason, then do what a lot of us imperials have to do — deal with it. I’m a scientist by training, and therefore completely comfortable with the SI system. When I did bench work I had to sling around grams, liters, and meters daily. And when I drove home I saw (and largely obeyed) the speed limit signs posted in miles per hour. No problems, no awkward roadside conversations with a police officer explaining that I was still thinking in metric and thought that the 88 on my speedometer was really in km/h and I was really doing 55. If I stopped at the store to pick up a gallon of milk and a couple of pounds of ground beef for dinner, I wasn’t confused, even if I slipped a 2-liter bottle of soda into the order.
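For the record, the roadside arithmetic above checks out: 88 read as km/h is about 55 mph. A trivial pair of converters (the function names are mine, and the exact factor is 1.609344 km per mile):

```c
#include <assert.h>
#include <math.h>

/* One international mile is exactly 1.609344 km. */
double kmh_to_mph(double kmh) { return kmh / 1.609344; }
double mph_to_kmh(double mph) { return mph * 1.609344; }
```

So a speedometer showing 88 in km/h would indeed mean a legal-ish 54.7 mph in a 55 zone.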

At the end of the day, I don’t really see what all the fuss is about. Imperial and metric both have their place, and each system seems to be doing its job just fine. If your argument is that imperial units are inelegant and awkward, even though you’re correct I don’t think that’s enough to sway the imperial holdouts. And if you’re just upset because we’re being stubborn and won’t join the enlightened metric masses, then I think you’re probably going to be upset for a long time to come.

Filed under: Featured, Interest, rants

Get Ready for the Great Eclipse of 2017

On August 21, 2017, the moon will cast its shadow across most of North America, with a narrow path of totality tracing from Oregon to South Carolina. Tens of millions of people will have a chance to see something that the continental US hasn’t seen in ages — a total eclipse of the sun. Will you be ready?

The last time a total solar eclipse visited a significantly populated section of the US was in March of 1970. I remember it well as a four-year-old standing on the sidewalk in front of my house, all worked up about space already in those heady days of the Apollo program, gazing through smoked glass as the moon blotted out the sun for a few minutes. Just watching it was exhilarating, and being able to see it again and capitalize on a lifetime of geekiness to heighten the experience, and to be able to share it with my wife and kids, is exciting beyond words. But I’ve only got eight months to lay my plans!

Where and When

First, the basics. Totality will cross the Pacific coast at 17:15 UTC just north of Depoe Bay, Oregon. It will proceed across southern Idaho into Wyoming – Grand Teton and Yellowstone visitors will have quite a treat – then Nebraska, a tiny corner of Kansas, Missouri, small slivers of Illinois and Kentucky, across Tennessee and a fraction of North Carolina, finally heading out to sea between Charleston and Myrtle Beach, South Carolina at 18:49 UTC. Need to see how close you are to totality and when you can expect the eclipse to start? NASA has put together a handy interactive Google Map for just that purpose.

The Eclipse of 2015. Source: NASA eclipse web site

Your first task is to decide where you’re going to watch events unfold. Assuming you want to witness totality, quite a few major cities are in or very near the path – Salem, Oregon; Boise, Idaho; Lincoln, Nebraska; Kansas City and St. Louis; and Nashville, Tennessee. Viewing opportunities will abound in and around these cities, so it won’t be much of a chore to step outside at the appointed hour. However, I’ve heard that the sight of the moon’s shadow racing across the land is especially exciting if you can get somewhere elevated. So on the 21st you’ll find me sitting on the top of Menan Butte outside of Rexburg, Idaho, watching the shadow approach across the plains to the west.

It’s worth noting that the path of totality east of the Mississippi is within a reasonable day’s drive of about half the population of the United States. If you need to travel to get to totality, you’ll need to think ahead, because you’re going to be competing with a lot of other eclipse watchers in addition to the usual summer travelers. Destination locations, like national parks and major resort areas, are likely to be booked. In fact, it may be too late already — I can’t find a hotel room in Idaho Falls for that weekend to save my life. Looks like we’ll be camping by the side of the road.

How to Observe

Eclipse glasses are a must. Source: Sky and Telescope

Once you decide where to be and make the appropriate sacrifices to the weather deity of your choice for clear skies, what are you going to do? Most people will be content with just watching, but no matter where you go there are likely to be a ton of people and a party atmosphere, so be prepared to be sociable.

For direct viewing before totality, you’ll want to think about eye safety. At more populated viewing sites, vendors will no doubt be doing a brisk business selling eclipse glasses at incredible markups, so you might want to order yours ahead, and maybe have a few extras to share with unprepared watchers. A shade 14 welding helmet filter will also do the trick, as will fully exposed and developed black and white photo film, as long as it’s a silver-based film. Pinhole cameras are a good choice too, but you’ll need at least a meter focal length to project a decent image. If you don’t feel like toting a refrigerator box around, projecting the image from a telescope or binoculars onto a screen is a good way to go too.

And don’t forget to bring a flashlight – it’ll be as dark as night for the few minutes that it takes for the moon’s shadow to pass.

Eclipses Aren’t Just for Watching

Hackers and space geeks might not be content to just watch, of course. Personally, I’ll be tending an array of cameras to capture the event, as I suspect many others will. Many ham radio operators will be trying to use daytime ionospheric skip to work long-distance contacts during the eclipse, and there are some coordinated efforts to conduct experiments during the eclipse. Others with a scientific bent and the right resources might choose to replicate Sir Arthur Eddington’s confirmation of Einstein’s General Relativity during a 1919 solar eclipse; the bright star Regulus in the constellation Leo will be close enough to the sun to allow measurement of the gravitational lensing Einstein predicted. And you might even be able to get funding for public outreach efforts to enhance the viewing experience.

No matter how you choose to spend Eclipse Day 2017, enjoy it. If you do happen to miss it, don’t worry — the US gets treated to another total eclipse in 2024.

And if you happen to find yourself on Menan Butte outside of Rexburg, Idaho, come on over and say hi.

Filed under: Current Events, Featured, news, Original Art, solar hacks

Pulsed Power and its Applications

Pulsed power is a technology that consists of accumulating energy over some period of time, then releasing it very quickly. Since power equals energy (or work) divided by time, the idea is to deliver a fixed amount of energy in as short a time as possible. The pulse lasts only a fraction of a second, but that instantaneous power has very interesting applications. With this technology, power levels of more than 300 terawatts have been obtained. Is this technology only for unlimited budgets, or is it within reach of the common hacker?

Consider for example discharging a capacitor. A large 450 V, 3300 µF electrolytic capacitor discharges in about 0.1 seconds (this varies a lot depending on capacitor design). The energy stored in it is given by ½CV², which works out to 334 joules; delivered in 0.1 seconds, that’s an average power of about 3,340 watts. In fact, a popular hacker project is to build large capacitor banks. Once you have the bank, and a way to charge it, you can use it to power very interesting devices such as:

A portable, 1.25 kJ coilgun by [Jason Murray]
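The energy and power figures quoted above are easy to sanity-check with a couple of helper functions (the function names are mine):

```c
#include <assert.h>
#include <math.h>

/* Energy stored in a capacitor: E = (1/2) C V^2, in joules. */
double cap_energy_j(double capacitance_f, double voltage_v)
{
    return 0.5 * capacitance_f * voltage_v * voltage_v;
}

/* Average power if that energy is released over a time t, in watts. */
double avg_power_w(double energy_j, double discharge_time_s)
{
    return energy_j / discharge_time_s;
}
```

For the 3300 µF, 450 V example: 334 J released over 0.1 s gives roughly 3.3 kW of average power, and the peak power at the start of the discharge is higher still.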

Railguns in particular are the subject of serious research. You may have read about the Navy railgun, capable of reaching a muzzle speed of more than 4,600 mph (around Mach 6), more than any explosive-powered gun. Power is provided by a 9-megajoule capacitor bank. The capacitors discharge on two conducting rails, generating an electromagnetic field that fires the projectile along the rails. The rail wear due to the tremendous pressures and currents, in the millions-of-amperes range, is still a problem to be solved.

Another device that uses capacitors for high power pulses is the Marx generator. It is a very simple circuit that allows you to charge a number of capacitors in parallel and then suddenly discharge them in series using spark gaps. Very large Marx generators have been built for high voltage component testing and other purposes, but it’s also very easy to make a small lightning simulator in under an hour if you have some high voltage capacitors and resistors. Marx generators are used in the Z machine, a Sandia National Labs fusion research project, which is capable of shooting 26 million amperes in 95 nanoseconds. Temperatures of 3.7 billion kelvins have been obtained.
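In an ideal Marx generator, N stages charged in parallel to a voltage V erect in series, so the peak output approaches N × V; real generators lose some of that to stray capacitance and spark-gap drops. As a sketch (the function name is mine):

```c
#include <assert.h>

/* Ideal Marx generator output: N stages charged in parallel to charge_v
   discharge in series, multiplying the voltage. Losses are ignored. */
double marx_output_v(int stages, double charge_v)
{
    return stages * charge_v;
}
```

For example, ten stages charged to 50 kV each would ideally erect to half a megavolt.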

The Marx generator is a particular case of a pulse forming network, or PFN. Capacitors, inductors and transmission lines, or a combination of them are used for energy storage in various topologies. Then, the network is discharged into the load via a high voltage switch

Transmission line PFN. By Chetvorno, via Wikimedia commons.

such as a spark gap or a thyratron. The transmission line PFN is interesting because the capacitance of the conductors in the line is used for both transmission and energy storage. When the power supply is connected, it slowly charges up the capacitance of the line through RS. When the switch is closed, a voltage equal to V/2 is applied to the load, the charge stored in the line begins to discharge through the load with a current of V/(2Z0), and a voltage step travels up the line toward the source.
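For a matched line (load resistance equal to the characteristic impedance Z0), the relations above reduce to two one-liners (the function names are mine):

```c
#include <assert.h>

/* A transmission line charged to v_charge and discharged into a matched
   load delivers half the charge voltage across the load... */
double pfn_load_voltage(double v_charge) { return v_charge / 2.0; }

/* ...and a flat-topped current pulse of V / (2 * Z0) for the duration
   of the round trip along the line. */
double pfn_load_current(double v_charge, double z0) { return v_charge / (2.0 * z0); }
```

So a line charged to 1 kV with a 50-ohm characteristic impedance drives a matched load with a 500 V, 10 A rectangular pulse.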

Compulsators (a portmanteau of compensated pulsed alternator) are another way of delivering high current pulses. They convert rotational energy from a flywheel directly into electrical energy. The compulsator works in a similar way to a normal alternator, but is designed with minimal-inductance windings so it can deliver extremely high currents in very short time periods. There is little information on compulsator design and, as far as we know, no hobbyist has ever made one. You have your homework assignment.

Alternator vs compulsator designs. From Weldon et al.

The explosively pumped flux compression generator, or EPFCG for short, is a device that generates a high power electromagnetic pulse by using a high explosive to compress magnetic flux. Millions of amperes and tens of terawatts of power are produced by the EPFCG in a single pulse; the device is destroyed in operation.

Steps in flux compression. By Croquant, via Wikimedia commons.

The three basic steps in flux compression are shown above.

  1. An external magnetic field threads a closed ring conductor.
  2. The ring’s diameter is reduced by the explosive. The variation of the magnetic flux induces a current in the ring, which in turn creates a new magnetic field, so that the total flux in the interior of the ring is maintained.
  3. The external and induced magnetic fields add up so that the total magnetic flux remains constant, and a current is created in the ring.

The compression process allows the chemical energy of the explosives to be (partially) transformed into the energy of an intense magnetic field surrounded by a correspondingly large electric current. There are several designs of EPFCGs. The figure shows the hollow tube type.


Pulsed power is also used in particle accelerators and high power lasers and the technology is rapidly evolving.

If you’re starting out, you may want to experiment with capacitor banks which are a relatively simple way of obtaining pulsed power. But if you do, take all necessary precautions. The power levels can be extremely dangerous.

Filed under: Featured, misc hacks

The Wright Flyer: Engineering And Iterating

The types of steps and missteps the Wright brothers took in developing the first practical airplane should be familiar to hackers. They started with a simple kite design and painstakingly added only a few features at a time, testing each, and discarding some. The airfoil data they had was wrong and they had to make their own wind tunnel to produce their own data. Unable to find motor manufacturers willing to do a one-off to their specifications, they had to make their own.

Sound familiar? Here’s a trip through the Wright brothers development of the first practical airplane.

Starting Out: Kites And Gliders

To give you an idea of their background, neither Orville nor Wilbur Wright had aeronautical training. Wilbur completed high school whereas Orville dropped out to pursue the printing business with Wilbur. For that, they’d designed and built their own printing press. In 1890, when the introduction of the safety bicycle caused a boom in the bicycle market, they switched to repairing and selling bicycles, and by 1896 were making their own brand.

Many other experimenters around the world were pursuing heavier-than-air flight but with different approaches. Some treated flying much like steering a boat on the water, using a vertical rudder for steering. Others felt that humans couldn’t react fast enough to gusts of wind and tried instead to make the craft inherently stable, for example by using dihedral wings. The Wright brothers disagreed with this and wanted to give the pilot full control.

The 1899 kite with wing warping

They observed that birds turned by changing the angles of the ends of their wings, causing their bodies to roll and change direction. They also felt this would help recover from tilting sideways due to side winds. One day Wilbur was idly twisting a long inner tube box in their bicycle shop when they realized they could twist the ends of their wings in a similar fashion, what they called “wing warping”.

They proceeded with development in 1899 by building a bi-plane kite, or “double decker” as the Wrights called it, with a five-foot wingspan. It was tethered with lines from each wingtip going to control sticks. Rotating the control sticks in opposite directions would suitably warp the wings, making one side of the kite dip while the other rose, resulting in roll and a turn.

Cross section of a wing with camber

While some inventors at the time considered flat planes for wings, they decided to go with a camber for theirs, a wing with a curved upper surface, a concept that had first been talked about in scientific terms 100 years earlier by Sir George Cayley.

The brothers then looked for a location with a fast, steady wind for their next series of tests and settled on Kitty Hawk, North Carolina, far from their home in Dayton, Ohio. For free gliding, they also made use of Kill Devil Hills, a set of sand dunes up to 100 feet high just 4 miles to the south of the town of Kitty Hawk.

The 1900 glider

Their glider had a horizontal elevator in front of the wings, a feature also known as a canard and something fairly unique to the Wright brothers’ planes. The main purpose behind the horizontal elevator was to help keep the glider’s center of pressure at the same location as its center of gravity; without that, a plane can’t maintain horizontal flight. The elevator could also be tilted back and forth to angle the glider upward or downward, controlling ascent and descent. Tests in 1900 showed that the elevator worked well.

However, they got less lift from the wings than they expected. They returned in 1901 with a glider that had a larger wing area in hopes of getting more lift. However, they still got less lift than expected, even after making modifications to the curvature of the wings.

Fixing Lift — DIY Wind Tunnel

Initially they had designed the wings with the aid of data from Otto Lilienthal, another experimenter who’d done a lot with gliders, as well as a lift equation that had been in use for over a hundred years. The Wrights suspected both sources. The lift equation was:

lift = k S V² CL

where k is the coefficient of air pressure, also known as the Smeaton coefficient, S is the total area of the lifting surface, V is the relative wind velocity, and CL is the coefficient of lift.

Over the years, a large number of values were proposed for the Smeaton coefficient, 0.0054 being the most popular. Using data from their flights with kites and the glider, the Wrights calculated a value of 0.0033 instead, close to the value that Langley, another aviation pioneer, was using.
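In code, the lift equation is a one-liner; note how switching the Smeaton coefficient from 0.0054 to the Wrights’ 0.0033 scales every predicted lift value down to about 61%, which is roughly the shortfall they were seeing in the field. The function name and the unit annotations are my own; the units are those the Wrights worked in (pounds, square feet, miles per hour).

```c
#include <assert.h>
#include <math.h>

/* lift = k * S * V^2 * CL, where k is the Smeaton coefficient, S the
   lifting surface area (sq ft), V the relative wind speed (mph), and
   CL the dimensionless coefficient of lift. */
double lift_lb(double k, double s_sqft, double v_mph, double cl)
{
    return k * s_sqft * v_mph * v_mph * cl;
}
```

Because k multiplies everything, correcting it rescales the whole table of predictions at once, while errors in CL depend on the particular wing shape, which is why the Wrights had to test both.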

Model wing test using a bicycle
Wind tunnel replica

After returning to Dayton, to test the accuracy of Lilienthal’s data, they mounted a bicycle wheel horizontally to the front of a bicycle. Attached to the wheel was a model wing mounted vertically on an axis whose angle relative to the oncoming airflow could be adjusted. Ninety degrees to that was a flat plate with its face facing the oncoming airflow, there to create drag. They rode this bicycle at a constant velocity when the surrounding air was near-calm. The goal was to find an angle for the wing that would exactly counteract the drag of the plate, at which time the wheel would not rotate. Lilienthal’s data indicated that the angle should have been 5 degrees but they found it was around 18 degrees.

They then made a wind tunnel consisting of a sixteen inch square box that was six feet long. They made two balances to go inside and to which miniature wings could be mounted. One balance tested lift and the other tested drag. The wind tunnel had a window in the top for looking down at the balances in action and for taking measurements.

From the wind tunnel tests they came up with a new Smeaton coefficient. They also concluded that Lilienthal’s data was fine, but applied only to the wing shape and curvature that he’d tested with.

The 1902 glider
The 1902 glider

In 1902 they returned to Kitty Hawk with new wings. These wings had a longer wingspan and were narrower. They were also flatter, having less of a camber. All this was based on the best of their wind tunnel test results. They’d also replaced the bulky, rectangular elevator with a smaller ellipsoidal one.

All their painstaking wind tunnel tests paid off and in the first day’s testing they flew it successfully as a kite with the tethers almost vertical. That was followed later that day by twenty-five equally successful glides.

Controlled Flight At Last

Another problem they ran into in 1901 was that the glider would sometimes turn in the opposite direction expected during the wing warping tests. This later became known as adverse yaw. To counter that their 1902 glider had two fixed, vertical tails, each six feet tall.

They did still have problems though; the last big one was what they called “well-digging”. If the wings were low on one side (the right wings, for example) and the pilot was slow to correct it, gravity would take over and the glider would begin sliding to the right. The slide added air pressure on the right side of the fixed tail, pushing the tail to the left and driving the right wingtip through the sand with a screwing motion, hence “well-digging”. The solution they came up with was to replace the fixed tails with a single, hinged vertical tail tied to the wing warping system, which would swivel in a way that aided the warping in lifting the low wing.

With the well-digging gone, they felt they had what they needed in order to patent a three-dimensional system of airplane control. They were also ready to move on to the next step, powered flight.

Powered Flight

A Wright engine, serial no. 17, circa 1910
A Wright engine, serial no. 17, circa 1910 at the New England Air Museum

The Wrights were unable to convince a motor manufacturer to make a motor to their specifications so they had to make their own. Together with Charles Taylor, a mechanic and machinist who worked for the Wrights in their bicycle shop, they designed and built their motor using an aluminum crankcase cast in a local foundry, and a crankshaft made of high-carbon tool steel. It had four cylinders and the fuel was gravity fed.

For the propellers their research didn’t turn up any formulas, so they had to theorize on their own. They decided that propellers were essentially wings rotating in a vertical plane, and so were able to use their wind tunnel data to design them. They made them a little over eight feet long out of three glued laminations of spruce. They concluded that the propellers were 66% efficient, but modern tests indicate they were an even more impressive 75%.

The propellers were mounted behind the wing as pushers so as to not interfere with the airflow around the wings. Chains connected them to the motor. They knew that bicycle chains wouldn’t be strong enough to turn the propellers so they used chains manufactured for automobile transmissions. To keep the chains from flapping they ran them through metal tubes. And to make them counterrotate, one of the chains was made to traverse a figure eight path.

In addition to adding power, numerous other tweaks were made resulting in the Wright Flyer I.

Test Flights

In 1903 they successfully tested the Wright Flyer I near Kitty Hawk, but after its fourth flight it was severely damaged when a powerful gust flipped it over multiple times, ending that year’s tests. This is the one that now hangs in the Smithsonian in Washington, D.C.

Wright Flyer II flying circles in 1904
Wright Flyer II flying circles in 1904

In 1904 they built the Wright Flyer II but from then on did their testing at Huffman Prairie, a cow pasture near Dayton. Perhaps the highlight of this was the first complete circle ever flown by a manned, heavier-than-air powered flying machine. 1905 brought the Wright Flyer III with more improvements, including an enlarged elevator and tail moved further from the wings. They also disconnected the tail from the wing warping system and gave the pilot independent control of it, in keeping with their approach of giving the pilot full control of all three axes. Stability and control were greatly improved, culminating in a 24.5 mile flight in 38 minutes and 3 seconds, landing when the fuel ran out.

They were ready to bring it to market.

But if you think that flight’s pioneering period just after 1900 was the only time home-grown inventors could expand flight’s horizons, you’d be wrong. The story of [Paul MacCready], his son, and the many others who worked on the Gossamer Condor and human-powered flight to win the Kremer Prize in 1977 has all the hallmarks of the Wright brothers’ story you’ve just read. More recently a group that calls itself Aerovelo, with help from various other organizations, won the Sikorsky Prize for a human-powered helicopter, Atlas, in 2013. Clearly there’s still room for pioneering.

Filed under: Engineering, Featured, transportation hacks

Anatomy Of A Digital Broadcast Radio System

What does a Hackaday writer do when a couple of days after Christmas she’s having a beer or two with a long-term friend from her university days who’s made a career in the technical side of digital broadcasting? Pick his brains about the transmission scheme and write it all down of course, for behind the consumer’s shiny digital radio lies a wealth of interesting technology to try to squeeze the most from the available resources.

In the UK, our digital broadcast radio uses a system called DAB, for Digital Audio Broadcasting. There are a variety of standards used around the world for digital radio, and it’s fair to say that DAB as one of the older ones is not necessarily the best in today’s marketplace. This aside there is still a lot to be learned from its transmission scheme, and from how some of its shortcomings were addressed in later standards.

Channels and Capacities

The spectrum of a wideband FM broadcast transmission, on 93.9 MHz.
The spectrum of a wideband FM broadcast transmission, on 93.9 MHz.

You will all be used to analogue broadcasting on AM and FM, in which each station has its own transmitter and occupies its own frequency. With a digital system like DAB each transmitter does not restrict itself to only one station; instead it transmits several at once in a multiplex. Each multiplex has a data rate of just under 1.2 Mbit/s, which in practice allows it to carry around ten MP2-compressed stations, depending on the data rates of the individual stations. It’s difficult to state a hard and fast figure for the channel capacity of a multiplex, because not only can a different bit rate be used for each channel, those rates can be changed on the fly.
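As a back-of-the-envelope sketch (the 1184 kbit/s figure and the per-station rates here are assumed, typical values; real usable capacity depends on the error protection level chosen), the trade-off between station count and per-station quality looks like this:

```python
MULTIPLEX_KBPS = 1184  # assumed usable capacity, just under 1.2 Mbit/s

# typical MP2 bit rates a broadcaster might choose (kbit/s)
for station_rate in (64, 96, 128, 192):
    stations = MULTIPLEX_KBPS // station_rate
    print(f"{station_rate:>3} kbit/s per station -> {stations} stations fit")
```

At around 128 kbit/s per station the multiplex carries roughly the ten stations mentioned above; drop the rate and you can squeeze in more, at the cost of audio quality.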

The British multiplexes are transmitted in the spectrum once occupied by the upper set of the old British 405-line TV channels, around 200 MHz. However they are not modulated onto an RF carrier in the same way as a traditional analogue radio or TV station is. To understand why this is the case, imagine for a minute that you had a serial port with a 1.2 Mbit/s data stream on it. If you were to feed the stream to a traditional modulator on an analogue transmitter, you’d have a transmitted bandwidth of just over 1.5 MHz. In an idealised free-space environment that would make a passably good broadcast system, but to see why it would not work in the real world, just think for a moment about watching analogue TV with an inadequate antenna.


Our ultra-high-budget simulation of analogue TV ghosting.
Our ultra-high-budget simulation of analogue TV ghosting.

Sometimes on your TV in the analogue days you would see a second “ghost” image, a faint clone of the main image overlaid to the right of it. This was the result of the transmitted signal taking multiple paths to your receiving antenna, the main image being via the direct path and the “ghost” image being a path via a reflection from an object such as a tall building or a passing aircraft. The distance on the screen between the real image and its ghost represented the time difference between those two radio paths.

Now imagine that high-speed digital data stream again, only instead of in idealised free space put it in a real-world situation with passing aircraft, and all that ghosting. The time difference between the real stream and its ghost is now very significant compared to the length of an individual data bit, and thus overlaying the ghost on the original stream has the effect of causing huge errors in the received data stream. Clearly some means of combatting this problem is required.

Many Little Channels

The answer comes in the form of increasing the length of the data stream bits such that the ghost time difference is no longer significant in relation to it. Simply lengthening the data bits of the stream would reduce the data rate to the point of uselessness, so they instead split the one single high data rate stream into many individual low data rate streams with much longer bit lengths.

Part of the spectrum of a DAB transmission, at 210.9 MHz. The individual carriers can clearly be seen.
Part of the spectrum of a DAB transmission, at 210.9 MHz. The individual carriers can clearly be seen.

That single carrier with an over 1.5 MHz bandwidth then becomes over 1500 individual carriers, each with a 1 kHz bandwidth, and each of those carriers has a low enough data rate for the ghosting to no longer be a problem. The overall data rate is the same, as is the overall spectrum bandwidth, but the resistance to ghosting has been improved enormously. It also has the handy effect of improving the resistance to typical narrow-band RF interference, because a certain number of the individual carriers can be lost without exceeding the ability of the error correction to compensate.
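A rough calculation shows why this works. Assuming a simple two-bits-per-symbol QPSK model and ignoring guard intervals and coding overhead (so these numbers are illustrative, not the exact DAB parameters), a multipath echo that spans many symbols of a single fast carrier is only a tiny fraction of one symbol on each of the slow carriers:

```python
TOTAL_RATE = 1.2e6   # bits/s for the whole multiplex
CARRIERS   = 1536    # individual carriers in a DAB ensemble
ECHO_DELAY = 30e-6   # an assumed 30 us echo, ~9 km of extra path length

# crude model: 2 bits per symbol, no guard interval, no coding overhead
single_carrier_symbol = 2 / TOTAL_RATE
multi_carrier_symbol  = 2 / (TOTAL_RATE / CARRIERS)

print(f"single-carrier symbol: {single_carrier_symbol * 1e6:.2f} us")
print(f"per-carrier symbol:    {multi_carrier_symbol * 1e6:.0f} us")
print(f"echo / single-carrier symbol: {ECHO_DELAY / single_carrier_symbol:.0f}")
print(f"echo / per-carrier symbol:    {ECHO_DELAY / multi_carrier_symbol:.3f}")
```

The echo is about 18 symbols long on the single fast carrier, hopelessly smearing the data, but barely one percent of a symbol on each slow carrier.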


Splitting the stream into multiple carriers in this way is referred to as COFDM, or Coded Orthogonal Frequency Division Multiplexing, and since each carrier is phase modulated by the four 90-degree-apart quadrature vectors the modulation scheme is referred to as DQPSK, for Differential Quadrature Phase Shift Keying. Yes, the linguistic influence of [Samuel Morse]’s key finds its way into digital broadcasting.
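The differential part of DQPSK is worth a closer look: each pair of bits is sent as a phase *change* relative to the previous symbol, so the receiver never needs an absolute phase reference. A minimal sketch (the bit-pair-to-phase mapping here is illustrative, not the exact DAB mapping):

```python
import cmath
import math

# illustrative mapping of bit pairs to phase increments
PHASE_STEP = {(0, 0): 0.0, (0, 1): math.pi / 2,
              (1, 1): math.pi, (1, 0): 3 * math.pi / 2}

def dqpsk_modulate(bits):
    """Turn a flat bit list into complex symbols; phase accumulates."""
    phase, symbols = 0.0, []
    for pair in zip(bits[::2], bits[1::2]):
        phase += PHASE_STEP[pair]
        symbols.append(cmath.exp(1j * phase))
    return symbols

def dqpsk_demodulate(symbols):
    """Recover bits from the phase difference between adjacent symbols."""
    inverse = {round(math.degrees(v)) % 360: k for k, v in PHASE_STEP.items()}
    bits, prev = [], 1 + 0j  # reference symbol at phase 0
    for s in symbols:
        diff = round(math.degrees(cmath.phase(s / prev))) % 360
        bits.extend(inverse[diff])
        prev = s
    return bits

data = [0, 1, 1, 1, 1, 0, 0, 0]
assert dqpsk_demodulate(dqpsk_modulate(data)) == data
```

Because only phase differences carry information, a slowly drifting carrier phase (or an unknown channel phase shift common to adjacent symbols) cancels out in the division `s / prev`.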

Sounds Like Mud

Of course, the nature of the RF side of DAB and similar transmissions is only half the story. The other half is the compression algorithm and the error correction algorithm, which define the real-world characteristics of the standard. DAB in particular is notorious for poor performance under low signal conditions, in which the signal can dissolve into a sound that is colloquially described as “like boiling mud”. Other countries have either abandoned their DAB rollout or gone straight for a more recent standard such as DAB+. That’s the price Brits pay for their country being an early adopter.

So why does DAB have this poor performance compared to its successor? According to my friend, as we cracked open another couple of San Miguels cooled by the frosty night outside the window, the secret isn’t in its use of MP2 rather than AAC, but in the error coding scheme. The designers of DAB tried to shape the standard so that the components they considered most important to the intelligibility of the received audio were protected. They thus weighted the error coding scheme towards certain frequencies, and it seems this is responsible for the flaw, because it left the stream more vulnerable at the other frequencies. The resulting degradation in quality becomes much steeper as the percentage of the stream that is lost rises, to the extent that the system is quickly rendered unusable.

We all pick up in-depth knowledge of the systems and technologies we work on during our careers. I knew my friend worked in this line, and this was a fascinating opportunity to gain some understanding of a system about which I had a basic grasp but didn’t really know what made it tick. It’s this kind of information-sharing that’s so valuable, while little may come of my new-found understanding of DAB there is a lot to be said for accruing technical knowledge for its own sake. If you find yourself hanging out with a friend from way back, make sure you ask them about their specialities, you might learn something interesting.

DAB radio header image: Yisris (CC BY-SA 2.0) via Wikipedia Commons.

Filed under: Engineering, Featured, radio hacks

Did a Russian Physicist Invent Radio?

It is said that “success has many fathers, but failure is an orphan.” Given the world-changing success of radio in the late 19th and early 20th centuries, it’s no wonder that so many scientists, physicists, and engineers have been credited with its invention. The fact that electromagnetic radiation is a natural phenomenon that no one can reasonably claim to have invented sometimes seems lost in the shuffle to claim the prize.

But it was exactly through the study of natural phenomena that one of the earliest pioneers in radio research came to have a reasonable claim to at least be the inventor of the radio receiver, well before anyone had learned how to reliably produce electromagnetic waves. This is the story of how a Russian physicist harnessed the power of lightning and became one of the many fathers of radio.

Alexander Popov. Source: Wikipedia (public domain)

Alexander Stepanovich Popov was born in 1859 in the Ural mountain mining town of Krasnoturyinsk. Expected to follow in his father’s footsteps and become a priest, he instead chose to study the natural sciences and enrolled in the St. Petersburg University in the physics department.

After graduating and winning an appointment as an instructor at the Imperial Russian Navy’s Torpedo School in 1883, he turned his attention to electrical phenomena. The late 19th century was an exciting time in electrical research, when James Clerk Maxwell’s elegant equations predicting electromagnetic waves were just starting to be explored. It was a time when great minds like Heinrich Hertz, Oliver Lodge, and J.C. Bose were all working with the latest tools and instruments to probe the mysteries of Maxwell’s work.

The primary tool for detecting radio waves at the time was the coherer. Invented by Lodge based on the observation by Édouard Branly that powdered metal could conduct electricity after being exposed to electromagnetic waves, the coherer was a simple tube filled with iron filings between two electrodes. Initially, the resistance across the electrodes was relatively high thanks to the loosely packed powder and oxide coatings on each grain. A passing radio wave would cause the grains to almost weld together — sometimes sparks were reported coming from the coherer tube — which lowered the resistance enough to conduct electricity. Lodge had used his coherer to detect “Hertzian waves” in 1894, shortly after the death of their namesake.

The world's first radio receiver. Source: ITU News
The world’s first radio receiver. Source: ITU News

In his Naval School lab, Popov read of Lodge’s discovery and decided to explore it further. Being of a naval bent, he was concerned with the weather and atmospheric phenomena, and wondered whether a coherer could detect the electromagnetic signature of lightning. He set about building his own coherer, improving the design by building in an automatic decoherer.

A coherer is a one-shot device: once it detects a signal, it needs to be mechanically restored to the high resistance state by tapping to release the adhered metal granules. Popov’s decoherer was cleverly coupled to the bell used to signal a detected wave; once the clapper had struck the bell it would spring back to rest after tapping the coherer tube to jostle its contents.

Another Popov innovation was the addition of a pair of chokes on either side of the coherer to prevent strong AC signals from coupling with the DC circuits of the detector. Popov is also credited with the first legitimate radio antenna — he connected a long wire antenna to the coherer and, critically, attached the other end of the coherer to an earth ground.

On May 7th, 1895, Popov demonstrated his “storm indicator” to the Russian Physical and Chemical Society. How exactly he got Mother Nature to cooperate and produce a detectable lightning bolt during the demonstration isn’t clear; we can only assume a spark gap was used to simulate lightning for the gathered scholars. Popov did perform more experiments later that summer, though, detecting lightning some 20 miles distant and continuing to improve the world’s first radio receiver.

The potential value of his invention was not lost on him. He ended a paper written in early 1896 with a prediction that his receiver would form half of a complete wireless communication system “if only a source of such vibrations [radio waves] can be found possessing sufficient energy.” A few months later in March he had succeeded in doing just that with a transmitter powerful enough to reach his receiver 800 feet away. Unfortunately for Popov, Guglielmo Marconi had been working along similar lines and in June 1896 filed a patent for his radiotelegraph system. Lacking any documentation of his March demonstration, Popov could only protest Marconi’s claims and carry on.

Battleship General-admiral Apraksin, whose crew was rescued using Popov’s wireless. Source: Wikipedia (public domain)

Popov’s naval employers took interest in his system and allowed him to start experimenting with ship-to-shore communications. By 1900 he had established a wireless station on an island in the Gulf of Finland that would process hundreds of official ship-to-shore messages and play key roles in the rescue of a stranded battleship and later fifty fishermen adrift on an ice floe.

It would seem that although Marconi was first to patent and will always be remembered as “The Father of Radio,” Popov played a critical role in the engineering of radio. He demonstrated the first receiver, developed the decoherer, invented the first practical antenna, probably conducted the world’s first wireless communication, and certainly used radio for the first time in a sea rescue. That’s a fair number of firsts in a time when they were being racked up at a furious pace, and not a bad legacy to leave. It’s fitting, then, that May 7th is celebrated as Radio Day in Russia, and that the International Telecommunication Union (ITU) has a huge conference room in its Geneva headquarters named after him.

Filed under: Featured, History, radio hacks

Explosions that Save Lives

Normally, when something explodes it tends to be a bad day for all involved. But not every explosion is intended to maim or kill. Plenty of explosions are designed to save lives every day, from the highway to the cockpit to the power grid. Let’s look at some of these pyrotechnic wonders and how they keep us safe.

Explosive Bolts

The first I can recall hearing the term explosive bolts was in relation to the saturation TV coverage of the Apollo launches in the late 60s and early 70s. Explosive bolts seemed to be everywhere, releasing umbilicals and restraining the Saturn V launch stack on the pad. Young me pictured literal bolts machined from solid blocks of explosive and secretly hoped there was a section for them in the hardware store so I could have a little fun.

Pyrotechnic fasteners are mechanical fasteners (bolts, studs, nuts, etc.) that are designed to fail in a predictable fashion due to the detonation of an associated pyrotechnic device. Not only must they fail predictably, but they also have to be strong enough to resist the forces they will experience before failure is initiated. Failure is also typically rapid and clean, meaning that no debris is left to interfere with the parts that were previously held together by the fastener. And finally, the explosive failure can’t cause any collateral damage to the fastened parts or nearby structures.

Explosive bolt. Source: Ensign-Bickford Aerospace & Defense
Explosive bolt. Source: Ensign-Bickford Aerospace & Defense

Pyrotechnic fasteners fall into two broad categories. Explosive bolts look much like regular bolts, and are machined out of the same materials you’d expect to find any bolt made of. The explosive charge is usually internal to the shank of the bolt with an initiating device of some sort in the head. To ensure clean, predictable separation, there’s a groove machined into the bolt to create a shear plane.

Frangible nut and booster, post-use. Source: Space Junkie's Space Junk
Frangible nut and booster, post-use. Source: Space Junkie’s Space Junk

Frangible nuts are another type of pyrotechnic fastener. These tend to be used for larger load applications, like holding down rockets. Frangible nuts usually have two smaller threaded holes adjacent to the main fastener thread; pyrotechnic booster charges split the nut across the plane formed by the threaded holes to release the fastener cleanly.

“Eject! Eject! Eject!”

Holding back missiles is one thing, but where pyrotechnic fasteners save the most lives might be in the cockpits of fighter jets around the world. When things go wrong in a fighter, pilots need to get out in a hurry. Strapping into a fighter cockpit is literally sitting on top of a rocket and being surrounded by explosives. Most current seats are zero-zero designs — usable at zero airspeed and zero altitude — that propel the seat and pilot out of the aircraft on a small rocket high enough that the parachute can deploy before the pilot hits the surface. Dozens of explosive charges take care of ripping the aircraft canopy apart, deploying the chute, and cutting the seat free from the parachuting pilot, typically unconscious and a couple of inches shorter from spinal disc compression after his one second rocket ride.

Behind the Wheel

There’s little doubt that airbags have saved countless lives since they’ve become standard equipment in cars and trucks. When you get into a modern vehicle, you are literally surrounded by airbags — steering wheel, dashboard, knee bolsters, side curtains, seatbelt bags, and even the rear seat passenger bags. And each one of these devices is a small bomb waiting to explode to save your life.

When we think of explosives we tend to think of substances that can undergo rapid oxidation with subsequent expansion of hot gasses. By this definition, airbag inflators aren’t really explosives, since they are powered by the rapid chemical decomposition of nitrogenous compounds, commonly sodium azide in the presence of potassium nitrate and silicon dioxide. But the difference is purely academic; anyone who has ever had an airbag deploy in front of them or watched any of the “hold my beer and watch this” airbag prank video compilations will attest to the explosive power held in that disc of chemicals.

When a collision is detected by sensors connected to the airbag control unit (ACU), current is applied to an electric match, similar to the engine igniters used in model rocketry, buried within the inflator module. The match reaches 300°C within a few milliseconds, causing the sodium azide to rapidly decompose into nitrogen gas and sodium. Subsequent reactions mop up the reactive byproducts to produce inert silicate glasses and add a little more nitrogen to the mix. The entire reaction is complete in about 40 milliseconds, and the airbags inflate fully within 80 milliseconds, only to deflate again almost instantly through vent holes in the back of the bag. By the time you perceive that you were in an accident, the bag hangs limply from the steering wheel and with any luck, you get to walk away from the accident.
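The underlying reaction is 2 NaN₃ → 2 Na + 3 N₂, and a rough ideal-gas estimate shows why a small disc of powder fills a large bag. The 60 g charge mass here is an assumed, illustrative figure, and the gas is treated as if it had cooled to room temperature at atmospheric pressure:

```python
# 2 NaN3 -> 2 Na + 3 N2
R        = 0.08206   # ideal gas constant, L*atm/(mol*K)
M_NAN3   = 65.01     # molar mass of sodium azide, g/mol
CHARGE_G = 60.0      # assumed inflator charge mass (illustrative)
T_K      = 298.0     # assume gas cooled to ~25 C at 1 atm

mol_nan3 = CHARGE_G / M_NAN3
mol_n2   = mol_nan3 * 3 / 2          # stoichiometry: 3 N2 per 2 NaN3
volume_l = mol_n2 * R * T_K / 1.0    # V = nRT/P at 1 atm

print(f"{CHARGE_G:.0f} g of NaN3 -> {mol_n2:.2f} mol N2 -> {volume_l:.0f} L of gas")
```

That works out to roughly 34 liters of nitrogen, in the right ballpark for a driver-side bag, from a charge that fits in the palm of your hand.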

Grid Down

We’ve covered a little about utility poles and all the fascinating bits of gear that hang off them. One of the pieces of safety gear that lives in the “supply space” at the top of the poles is the fuse cutout, or explosive disconnector. This too is a place where a small explosion can save lives — not only by protecting line workers but also by preventing a short circuit from causing a fire.

Cutouts are more than just fuses, though. Given the nature of the AC transmission and distribution grid, the lines that cutouts protect are at pretty high voltages of 11 kV or more. That much voltage means the potential for sustained arcing if contacts aren’t rapidly separated; the resulting plasma can do just as much if not more damage than the short circuit. So a small explosive cartridge is used to rapidly kick the fuse body of a cutout out of the frame and break the circuit as quickly as possible. Arc suppression features are also built into the cutout to interrupt the arc before it gets a chance to form.

[Big Clive] recently did a teardown of another piece of line safety gear, an 11 kV lightning arrestor with an explosive disconnector. With a Dremel tool and a good dose of liquid courage, he liberated a carbon slug from within the disconnector, which when heated by a line fault ignites a .22 caliber charge similar to those used in powder-actuated fastener tools. The rapid expansion of gases ruptures the case of the disconnector and rapidly breaks the circuit.


We’ve covered a few of the many ways that the power of expanding gas can be used in life safety applications. There are other ways, too — snuffing out oil field fires comes to mind, as does controlled demolition of buildings. But the number of explosives protecting us from more common accidents is quite amazing, all the more so when you realize how well engineered they are. After all, these everyday bombs aren’t generally blowing up without good reason.

Filed under: Featured, Interest, Original Art

Make Logic Gates out of (Almost) Anything

Logic gates are the bricks and mortar of digital electronics, implementing a logical operation on one or more binary inputs to produce a single output. These operations are what make all computation possible in every device you own, whether it’s your cell phone, computer, or gaming console. There are myriad ways of implementing logic gates: mechanically, electronically, virtually (think Minecraft), and so on. Let’s take a look at what it takes to create some fun, out-of-the-ordinary gate implementations.

How they work

As an example, let’s consider the AND gate (the others are OR, NOT, NAND, NOR, XOR and XNOR). Electronic gates operate on two nominal voltages, normally 0 V and 5 V, representing logic 0 and logic 1, respectively.

The AND gate has two inputs A and B. The output of the gate, A.B, depends on the two inputs according to the truth table at the right. The AND gate has a “1” output only when both A and B are 1. As you can guess, the OR gate has a 1 output when A or B are 1, and 0 only when both A and B are 0.

Every gate has its own truth table. Although these seven gates are normally considered the “basic” gates, there are some gates, such as the NAND gate, that are universal, meaning that they can be interconnected to construct all other gates.
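The truth tables are easy to play with in software. This little sketch builds NOT, AND and OR out of nothing but NAND, which is one way to see what “universal” means:

```python
def NAND(a, b):
    # NAND outputs 0 only when both inputs are 1
    return 0 if (a and b) else 1

def NOT(a):
    return NAND(a, a)          # tie both inputs together

def AND(a, b):
    return NOT(NAND(a, b))     # NAND followed by an inverter

def OR(a, b):
    return NAND(NOT(a), NOT(b))  # De Morgan: A+B = NOT(NOT A . NOT B)

for a in (0, 1):
    for b in (0, 1):
        print(f"A={a} B={b}  AND={AND(a, b)}  OR={OR(a, b)}  NAND={NAND(a, b)}")
```

Every other gate (NOR, XOR, XNOR) can be built the same way by composing more NANDs.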

Combining Gates

Logic gates can be combined to perform any computation. There are millions of them in a computer chip. But let’s see a very simple application of the AND gate.

At the left is a schematic for an automatic thermostat. The heater must turn on if the water is cold, but only if there is enough water in the tank. Contacts X and Y are near the top of the tank and if they are covered with water a signal is sent to one of the gate’s inputs. The thermistor senses the water temperature, and if it is cold enough, a signal is sent to the other gate input. Therefore the heater goes on when there is sufficient water and the temperature is cold enough.
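In code, the whole thermostat is a single AND (the signal names are mine, for illustration):

```python
def heater_on(water_level_ok, water_is_cold):
    """Heater runs only when the tank is full enough AND the water is cold."""
    return water_level_ok and water_is_cold

assert heater_on(True, True) is True    # full tank, cold water: heat
assert heater_on(False, True) is False  # low water: never heat a dry element
assert heater_on(True, False) is False  # warm enough already: stay off
```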

Other Gate Implementations

Contemporary logic circuits use MOSFETs as the elements to build gates, but there are many ways to implement them. Using relays is one of them, and you can literally see how they work. [Andrew Kingsolver] has done an excellent job of explaining relay-based logic. The image below shows his implementation of the AND and OR gates. A quick analysis of the circuits reveals how the truth tables are obtained from the inputs, represented by the two switches (energizing the coil in a relay pushes the contacts to the far poles).

relay-and-gate relay-or-gate

In order to understand how arithmetic can be done with logic gates, the half adder is a good example. [Andrew Kingsolver] has done that as well. It takes two inputs (0 or 1) and outputs a sum and a carry, in binary of course. This can be implemented with an AND gate and an XOR gate. As you may know, early computers were relay-based, and you can even build one just for fun, if you have the time.
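In software the half adder is a one-liner: XOR gives the sum bit and AND gives the carry (a sketch of the logic, not of [Andrew Kingsolver]’s relay circuit):

```python
def half_adder(a, b):
    """Add two one-bit numbers: XOR is the sum bit, AND is the carry."""
    return a ^ b, a & b   # (sum, carry)

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} = carry {c}, sum {s}")  # 1 + 1 -> carry 1, sum 0
```

Chain a second stage to accept a carry-in and you have a full adder, and a row of those adds whole binary words.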

Vacuum tubes replaced relays as the main elements for computation. A simplified circuit that implements the NOR logic gate, which outputs 1 only when both inputs are 0, is shown at the right. The tube grids are the inputs. When both grids are low, the tubes are cut off, presenting essentially infinite resistance, so the supply pulls the output high. When voltage is applied to either grid, that tube conducts and current flows to ground, pulling the output low.

As with relays, all kinds of logic circuits were made using vacuum tubes. There were 17,468 of them in the ENIAC computer. Tube circuits were bulkier and more power-hungry than relay circuits, but the speed of computation was much greater.

Early vacuum-tube memory modules, circa 1955.
Diode AND gate by Thingmaker, CC-BY-SA 4.0

Eventually, silicon arrived, and vacuum tube logic was replaced with diode and transistor logic. This represented a dramatic increase in speed and a great reduction in size and power consumption. You guessed it: diodes can also be used to build logic gates, though not all of them; only the AND and OR gates can be built from diodes alone. However, by adding a transistor as an active element, all the other gates can be implemented.

The three-input, diode-only AND gate is shown in the picture above. When all inputs are positive, no diode conducts and the resistor pulls the output positive. If any of the three inputs is at 0 volts, current flowing through the corresponding diode pulls the output voltage down to 0 volts, while the other diodes are reverse biased and conduct no current.

How about mechanical logic gates? Sure, you can use LEGO to build gates and a half adder, or a small adding machine using wood and marbles. Of course, pneumatic logic gates can also be designed (useful in places with high levels of moisture or dust).

OR gate built in Minesweeper

Some bizarre ways of building logic gates also exist. Conway’s Game of Life, perhaps the most well known cellular automaton, has been shown to be a Universal Turing Machine, meaning that anything that can be computed algorithmically can be done within the game. The game is fascinating by itself, and detailed ways of building gates in it have been described.
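For anyone tempted to experiment, a complete Life step fits in a few lines of Python (cells are stored as a set of live (x, y) coordinates, so the grid is effectively infinite):

```python
from collections import Counter

def life_step(live):
    """Advance Conway's Game of Life by one generation."""
    # count how many live neighbours every candidate cell has
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # a cell is alive next generation with exactly 3 neighbours,
    # or with 2 neighbours if it was already alive
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# a "blinker" oscillates between a row and a column with period 2
blinker = {(0, 1), (1, 1), (2, 1)}
assert life_step(life_step(blinker)) == blinker
```

The gate constructions mentioned above are built from streams of gliders running through patterns like this, with collisions standing in for logical operations.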

Another similar example comes from the Minesweeper game, popular in early versions of Windows. This game seems innocent, but it is in fact NP-complete, a class of problems for which no efficient general solving algorithm is known.

So if you have some spare time, consider building some logic gates; after all, there are many ways to do it!

Filed under: Engineering, Featured