Posts filed under 'Obsolete Technology'
2012 is the year the netbook died. Netbooks are, or were, mini laptops with a footprint similar to that of a tablet. But unlike tablets, netbooks can run either Windows or Linux operating systems and have numerous ports for SD cards, HDMI cables, and USB peripherals. I bought my first netbook, an ASUS Eee PC, around 2006. It was cheap and it ran Windows XP. I needed a MIDI capable device for sequencing, and the ASUS was a perfect fit since I could balance it on top of a keyboard. Even with a lower end processor, I was still able to use the ASUS to record multi-track audio. But late last year, my netbook finally gave out after years of use and abuse. Many keys had been ripped out by my cats and the battery was providing poor performance. I guess I’ve been out of the loop a bit on consumer electronics, because I was surprised to discover there were few options for replacing my ASUS. Up until late last year, there were only two manufacturers left making netbooks: ASUS and Acer. Few retailers stocked netbooks in early 2013, but I was able to find a new Acer for pretty cheap after a long online search.
The only real advantage netbooks had going into 2012 was price, but that quickly changed as manufacturers started to offer traditional laptops at netbook prices. I suppose I’m one of the few people out there who prized the small form factor of netbooks and didn’t mind the tiny keyboard and display. For me, the only portable device that might work as a netbook replacement would be a tablet running Windows 8. Something like the HP Envy X2 might fit that bill, but it’s expensive and I’m not sure it has all the ports I would need. Of course there is the Chromebook, but all of my music apps are Windows based, so that’s not an option. I’m hoping in a couple of years ultrabooks will come down in price, since I’m sure my Acer will be used and abused by then.
April 23rd, 2013
For the most part, I find Facebook to be a complete waste of time. I equate it to gold mining — you have to sift through yard after yard of dirt and rock to find that tiny fleck of precious metal. I found one of those rare nuggets last week as I was digging through the frozen tundra of Facebook. One of my “friends” posted a news article about a Kickstarter effort to create a new digital video camera. I would generally ignore this, except the headline mentioned “Digital Bolex”. Being a fan of small gauge filmmaking, this sparked my interest and I researched a bit further. It turns out two filmmakers are trying to resurrect the Bolex name, this time to grace a digital camera. What they are planning is not your average lo-fi digi-cam though. One of the things I don’t like about my Canon T2i is the compression it applies to video. The compression is not terrible, but it puts one at a disadvantage if you plan on doing any serious post production work like color grading. The Digital Bolex will shoot uncompressed 1080p HD footage using an off-the-shelf Kodak(?) sensor roughly the size of the super 16 frame. Actually, it sounds like the footage is a sequence of RAW images instead of a tidy little .mov file like those generated by my DSLR, but turning those individual images into a video file is pretty easy. I do this all the time with the time-lapse footage I shoot. Right now it looks like the makers of the Digital Bolex are way past their fundraising goal, so I think the project will actually come to fruition. I can’t wait to see what the early adopters have to say (note: I will not be an early adopter). I think the closest competition would be the A-Cam dII, which I think uses the same c-mount lens system as the Digital Bolex. However, the A-Cam dII uses a proprietary memory storage solution rather than something more conventional. The Digital Bolex uses more practical CF cards, which are relatively cheap and plentiful.
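Stitching a numbered image sequence into a video file really is easy to script. Here’s a minimal sketch that just builds the ffmpeg command line; the filenames, frame rate, and codec choice are placeholder assumptions, not anything the Digital Bolex folks have specified:

```python
# Build an ffmpeg command for stitching a numbered image sequence
# (frame_0001.png, frame_0002.png, ...) into a video file.
# Assumes ffmpeg is installed; pattern/fps/output are placeholders.
def image_sequence_cmd(pattern="frame_%04d.png", fps=24, out="timelapse.mov"):
    """Return the ffmpeg argument list for turning still frames into video."""
    return [
        "ffmpeg",
        "-framerate", str(fps),  # playback rate of the input frames
        "-i", pattern,           # printf-style numbered input pattern
        "-c:v", "prores",        # an intra-frame codec that holds up in grading
        out,
    ]

cmd = image_sequence_cmd()
print(" ".join(cmd))
```

Run the resulting command with your shell (or `subprocess.run(cmd)`) and ffmpeg does the rest.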
There are a couple of things that don’t excite me when reviewing the Digital Bolex’s specs:
Audio Inputs: Two XLR inputs, which is great, but no phantom power. To me it seems pointless to offer XLRs without phantom power. And those inputs should also offer channel switching so either channel 1 or 2 can be routed to both channels. My DVX100 used to do this and it was a great feature I used all the time. Maybe the Digital Bolex will do this via software?
HDMI Outputs: Needed for an external LCD monitor. I think there is an adaptor that allows you to do this, but there should be a dedicated jack for a standard HDMI or HDMI mini cable.
Ergonomics: To be perfectly honest, I never really liked the way my old 16mm Bolex felt in my hands. The only saving grace was the brick-like weight. Some of the old super 8 camera makers, like Eumig and Nizo, made cameras that felt wonderful in the hand. Not a big deal though, since I shoot using a stabilizer rig a lot these days.
Of course the Digital Bolex won’t be cheap when they finally start building them. Sounds like they will be in the $3k neighborhood, which isn’t all that bad considering a Canon 5D Mark II DSLR runs about the same. I still have a bunch of great c-mount lenses, so this camera is totally something I would love to have. But I’m not sure I’m ready to make that kind of investment for my minimal documentary work. If the audio options are improved and an HDMI jack is added, I might consider stretching for the camera though.
March 19th, 2012
On a recent trip to central Oregon I made a detour to Christmas Valley to visit the former Over-the-Horizon Backscatter (OTH-B) radar site — one of the last Cold War installations in Oregon. This first required permission from the US Bureau of Land Management and the Oregon Military Department, who were kind enough to allow me to explore the site. Most Oregonians have probably never heard about the AN/FPS-118 (the official Air Force designation) radar installation in Christmas Valley. The system actually had three components: the transmitter site here in Oregon, a receiver site in Tule Lake, California, and an operations center at Mountain Home Air Force Base in Idaho. All three sites were connected by satellite. A similar OTH-B radar existed in Maine to serve the East Coast. At the Oregon site, there are really three separate radar installations arranged in a sort of half-moon pattern facing west. Each has wood fencing surrounding the massive 460 acre perimeter and cyclone fencing around a power station, water tank, and the lone pole-barn style building. The operations center in Mountain Home processed all the data from the three West Coast radars. If something looked suspicious on the radar returns, interceptor aircraft would be dispatched to investigate.
Funding for the West Coast site was authorized by Congress between 1986 and 1988. Construction was completed in December of 1990 at a cost of over $300 million. In 1991, plans were on track to turn the West Coast site over to the Air Force’s Tactical Air Command for official operation. However, with the end of the Cold War, the Air Force decided to end activities at both the East and West Coast OTH-B radar sites and both were placed into caretaker status. In the mid ‘90s, the National Oceanic and Atmospheric Administration (NOAA) began using data from the Navy’s smaller, portable OTH-B radar system (AN/TPS-71). The Air Force operated the West Coast system briefly around this time for scientific and counter narcotics purposes, but this activity stopped in 1997 due to high operating costs. Again, the system was mothballed.
So how is OTH-B radar different from conventional radar? Well, conventional radar has always been limited in range by the curvature of the earth. OTH-B radar gets around this problem by bouncing radio signals off the ionosphere. A small part of the signal then reflects back to the receiver, which is called “backscatter”. The range of the OTH-B radar is anywhere from 500 to 1,800 nautical miles, much further than the conventional 250 mile maximum range of a rotating radar. The one major disadvantage of both the West and East Coast sites was the fixed 60 degree coverage. In contrast, a conventional rotating radar provides 360 degree coverage. The Soviets had their own OTH-B radar about a decade earlier than ours, which was nicknamed the “Russian Woodpecker” by shortwave radio operators. It was shut down around 1989, possibly because it interfered with civilian radio transmissions. Currently, the only large-scale fixed OTH-B radar site is in Australia.
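That ~250 mile ceiling for conventional radar falls straight out of line-of-sight geometry. Here’s a back-of-the-envelope sketch using the common 4/3-earth-radius rule of thumb (range in nautical miles ≈ 1.23 × √height-in-feet); the 40,000 ft target altitude is just an illustrative figure, not anything from the Air Force specs:

```python
import math

def radar_horizon_nmi(target_alt_ft, antenna_alt_ft=0):
    """Line-of-sight radar range in nautical miles.

    Uses the 4/3-earth-radius approximation for radio refraction,
    which reduces to d ~= 1.23 * sqrt(height_ft) per endpoint.
    """
    return 1.23 * (math.sqrt(target_alt_ft) + math.sqrt(antenna_alt_ft))

# A bomber cruising at 40,000 ft drops below the horizon at roughly
# 246 nautical miles -- no amount of transmitter power fixes geometry,
# which is why OTH-B bounces its signal off the ionosphere instead.
print(round(radar_horizon_nmi(40_000)))
```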
The Oregon site sat unused from 1997 to 2007. Thieves took their toll as the price for metals soared during the economic boom years. In 2007 an Oregon State Trooper pulled over Peter and Andry Sharipoff of Mount Angel, who were carrying 1,500 pounds of copper wire stolen from the site. Both, not surprisingly, were also charged with meth possession. The Air Force dismantled the massive radar arrays shortly thereafter. In 2008, Lake County began exploring ways to use the site for alternative energy production. Since power transmission lines still exist at the site, the thinking was it would be easy to install solar arrays and push power out from the three sites. As of 2011, there has been no alternative energy development at the site. The Oregon Air National Guard now uses the installation for training purposes, but there doesn’t seem to be any long term game plan for the 2,500 acres of land.
I didn’t see any activity when I visited this summer — although there were a few exterior lights on. The power station was buzzing, so juice is still flowing to the site. Overall, the buildings and fencing look to be in good shape. Of course the radar arrays are long gone, but you can still see the cement footings. I should be clear though; this is still a military site and should not be entered without permission from the Oregon Military Department and the Bureau of Land Management. I suspect there are still security systems in place around the remaining buildings, so trespassing would be a bad idea. If you would like to see the site, my suggestion is to view it from the well maintained gravel roads that ring the three installations. But there really isn’t that much to see, so I’m not sure it’s worth making a trek out to Christmas Valley unless you’re a hard-core fan of Cold War infrastructure.
September 10th, 2011
NASA’s Space Shuttle program has come to an end. I’m certainly feeling nostalgic already, but unlike a handful of legislators from Florida and Texas, I feel the smart thing to do is cleanly end the program. In a previous post about NASA’s ’70s era space station Skylab, I mentioned that program was developed under the assumption a future Space Shuttle would be able to service the orbiting outpost. This was all part of a grand NASA vision of space exploration that emerged during the ’70s. After the success of the Apollo program, NASA began to plan for the next phase of launching people into space, which included ambitious plans to put astronauts into long term orbit. Unfortunately, the Shuttle program suffered numerous delays in the ’70s and the Skylab program died before a working partnership could be developed. When the first Shuttle launched in 1981, it was unclear exactly what role it would be filling within a larger space program. At first, it seemed like the Shuttle could be used to put super secret spy satellites into orbit. But after the Challenger accident in 1986, the Air Force and the intelligence community decided they didn’t want to rely on the Shuttle, so they went back to launching on traditional rockets like the Delta. Without a major sponsor like the military, the Shuttle lacked a clear purpose. It failed to deliver on the promise of making cargo hauling cheaper, so commercial satellites continued to ride into space atop conventional, and unmanned, expendable rockets. Another failure of the program was safety. The Shuttle was not originally designed with a crew escape system, unlike all of the other space vehicles previously developed. While such a system probably would not have saved the crews of the Challenger and Columbia, it was a serious shortcoming that was never fully addressed. Finally, the cost of the Shuttle program was far more than anyone anticipated.
The work required to make a Shuttle (and solid fuel boosters) ready for launch was crazy expensive.
Despite all the shortcomings of the Shuttle program, it marked a huge technical achievement. The complexity of all the systems that had to work together for a successful mission was really amazing and ultimately, I believe, made NASA a stronger agency. However, I think that for political reasons the Shuttle program was maintained longer than it should have been. We need to move forward with a newer, and simpler, manned space program. Even though I have trashed commercial ventures like SpaceX in the past, they have demonstrated a growing capacity to step in and offer new cargo and crew launch options. I think the future of NASA’s manned space program should be all about diversity — in a decade I anticipate there will be a number of options for putting someone into orbit. It was a bad idea to rely on a single launch platform like the Shuttle, so I hope we move away from that model.
July 16th, 2011
I haven’t used my Nikon F100 much because my old Minolta X-370 is still my go-to SLR. However, as the Minolta quickly ages (it used to be my dad’s for crying out loud), it will need to be retired at some point. I’ve decided the F100 will be my next (and last) dedicated SLR before I move on to a dSLR. I had a Sigma SLR for a while which I liked — it had a compact design and was easy to use — but I never liked Sigma’s line of optics, with the exception being my DP1. Nikon, on the other hand, made/makes fantastic prime lenses that are generally affordable when compared to my all time favorite, Zeiss. I was originally considering an F5, but after talking with other Nikon users, they steered me to the F100, which is similar to the F5 but smaller and lighter. I’ve been using old manual E-Series Nikon lenses, which are ridiculously cheap and perform quite well despite the bad-mouthing they get from purists. Eventually, as my eyesight fails with age, I’ll invest in some autofocus lenses, but for now I’m happy focusing manually using the focus assist. Honestly, coming from my little Contax autofocus, this camera is a dream to use.
I mentioned that at some point I want to get a dSLR. That desire is being driven largely by an interest in high-def video. I’ve seen some really nice HD footage shot by those Canons, but I’ve read Nikon hasn’t quite gotten HD “right” in their cameras. What a shame, since my lenses are mostly Nikon at this point. Well, I guess I’ll just wait and see if Nikon can get it together.
September 21st, 2010
I recently finished a book written by B-Love’s friend Mac Montandon about jetpacks. It’s part history, part tale of an obsession with flying devices that can be strapped on like a rucksack. The most famous of these was the Bell Rocket Belt developed in the late ’50s and early ’60s (see it in action at the beginning of Thunderball). The Rocket Belt was somewhat simple in design, relying on hydrogen peroxide for fuel. To create the thrust needed to actually make a man fly, a single tank of nitrogen would force the hydrogen peroxide out of two tanks and into a catalyst chamber, which then created a superheated blast of steam. The Rocket Belt pilot used two hand controls to manipulate the direction of the nozzles, thus steering the Rocket Belt in the desired direction. Probably the biggest disadvantage of the design was the limited flight duration, which topped out at around 30 seconds. This curtailed the Rocket Belt’s appeal to the military, which would have been the main customer of this fanciful flying machine. But the Rocket Belt eventually led to a project that I found even more intriguing: the Bell Jet Belt.
The successor to the Rocket Belt, the Jet Belt relied on a small kerosene powered jet engine instead of a hydrogen peroxide “rocket” engine. This new design allowed for up to 20 minutes of flight at speeds of up to 120 MPH with a range of about 20 miles. The Jet Belt was built around the W-19 bypass turbofan engine, which was started by a small explosive cartridge. The turbofan design offered a lot of power with little fuel consumption, which gave this jetpack its impressive flying time and range. Like the Rocket Belt, controls were provided by means of hand-grips. Somewhat like the Harrier jump jet, thrust from the engine was “vectored” by nozzles, giving the pilot the ability to go forward, backward, and rotate from side to side. Interestingly, the kerosene fuel was housed in clear plastic tanks that wrapped around the engine and held about six gallons.
Like the Rocket Belt, the Jet Belt was insanely noisy, making its military value limited (since it would be worthless for surveillance missions). In 1968, the Jet Belt program died and the design was sold to Williams Research Corporation, which later used an updated version of the W-19 engine for the Air Force’s cruise missile program. Another reason the Jet Belt was probably unattractive to the military was its weight. Without fuel, the Jet Belt tipped the scales at 124 pounds. This made it less appealing as a practical flying device that could be used in the field (although if made today, lightweight materials like carbon fiber and titanium could be utilized to bring the weight down significantly).
While a few jetpack fans have successfully recreated Bell’s Rocket Belt, it appears only one man has tackled the far more complex Jet Belt design. Over in the UK, Richard Brown seems to be making progress on an updated version of the Jet Belt. There are still some real obstacles to overcome to make the design successful, like keeping the total weight of the jetpack reasonable and protecting the pilot from a catastrophic engine failure that could send fatal bits of fan into the pilot’s body, but I believe these technical hurdles could be cleared with the copious application of money and time. Of course, the real question is why the world would even need a turbojet powered jetpack. The answer is that it doesn’t fill any real need other than being totally cool.
March 22nd, 2010
OK, I don’t need a new project this summer, but I’ve recklessly put another one on my plate. I found the ARP Omni II pictured above on Craigslist for a pretty good price due to a couple of issues I’ll mention in a minute. So why bother buying an ARP Omni II, you might ask? Good question! The Omni II was a popular “string synth” from the late ’70s to early ’80s and was used by bands as diverse as Joy Division and Supertramp. The whole string synth fad came about in the early ’70s as an alternative to the Mellotron, which was an expensive and cumbersome tape-based proto-sampler keyboard. The idea behind the first string synth, the Eminent, was to synthesize string sounds rather than play back tape-recorded strings like the Mellotron. The Eminent led to the Solina, which led to the ARP Omni I/II, which I guess used the basic design of the Solina but added a couple of additional features. Pretty much every synth manufacturer offered some kind of string synth back in the day (probably thanks to disco), including Roland, Yamaha, and Korg. The key to any of these old string synths is the built-in analog chorus/phase effects — without them, a string synth just sounds kind of bland.
The Omni II is a bit of an odd beast even by ARP standards. It is basically three analog synths in one box sharing a common keyboard. It’s neither monophonic (meaning it plays one note at a time) nor polyphonic (meaning it plays chords) but paraphonic, which means it is capable of playing all the notes on the keyboard at once using something called divide-down technology. Polyphonic synths were quite rare in the ’70s. One of the few was the Prophet-5, but it was also quite expensive. String synths used the same divide-down technology as transistor-based organs, allowing for chords at a far lower cost than standard polyphonic synths.
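The divide-down idea is easy to sketch in code: one high-frequency “top octave” oscillator per note name, each halved repeatedly by flip-flop dividers to fill out the rest of the keyboard, so every key can sound at once. The note naming and 7040 Hz reference below are illustrative, not the Omni’s actual circuit values:

```python
# Divide-down ("top octave") frequency generation: twelve master
# oscillators, one per note name, halved once per lower octave.
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F",
              "F#", "G", "G#", "A", "A#", "B"]

def top_octave(a_ref=7040.0):
    """Twelve equal-tempered top-octave frequencies, anchored to A."""
    c = a_ref / (2 ** (9 / 12))  # A is 9 semitones above the octave's C
    return {name: c * 2 ** (i / 12) for i, name in enumerate(NOTE_NAMES)}

def keyboard(octaves=5):
    """Every playable pitch: each top-octave note divided by 2 per octave.

    Dividing a frequency by 2 drops it exactly one octave, which is why
    simple digital dividers produce a perfectly tuned keyboard.
    """
    top = top_octave()
    return {f"{name}{octaves - d}": f / (2 ** d)
            for name, f in top.items() for d in range(octaves)}

notes = keyboard()
# Dividing A at 7040 Hz down four octaves lands on concert A at 440 Hz.
```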
So what does my Omni need in the way of repairs? Well for one, all the ‘E’ notes sustain when I play the string sounds, which means there is probably a blown capacitor on one of the circuit boards. This is a pretty common issue with a lot of ARP synths from the ’70s. I guess the Omni was the product that kept company executives in cocaine and champagne, but to maximize profits, I think they cut corners wherever they could — like using cheap capacitors prone to failure. I’m also gonna need to either clean or replace all the sliders. I think someone tried to clean them with WD40 or similar, so now they’re pretty much shot. Right now I’m leaning toward replacing, since pretty much all the parts used on the Omni are still being made. Below are some links for other Omni owners to use when sourcing parts.
Arp Omni II Switches
Arp Omni II Chorus Phaser Chip
Arp Omni II Synth and Bass Voice Chip
Arp Omni II Voicing Circuit Chip
Arp String Ensemble Bass Section OP Amp
Arp String Ensemble Knobs
ARP Omni II Problems
Hear the Omni II in action on one of the best songs ever…
Love Will Tear Us Apart - Joy Division
June 30th, 2009
My first CD was either Pink Floyd’s Momentary Lapse of Reason or New Order’s Substance. The only one I was able to find the other day was Substance and it was missing the second disc. If memory serves, this CD was a gift from my parents around Christmas 1988 — making the disc over 20 years old. This album was played endlessly through my high school, college, and graduate school days. So how has it held up? Not so great. It’s scratched to the point of making some songs unplayable. But in terms of the structural integrity of the disc itself, it seems to be in pretty good shape. The disc hasn’t warped and there is no discoloration or signs of degradation from chemical instability. So I don’t think people need to start worrying about their CDs not being playable in 50 years. With a little care in handling, I think any CD should be fine for the long haul. Don’t know about CD-Rs though. I won’t be able to report on burnable CDs for another 15 years, but I’m not optimistic, since the organic dyes used in them are more likely to become unstable over time.
I’m still a fan of CDs in the day and age of downloadable digital music. I like stuff, so the idea of downloading music has never caught on with me. That being said, I’m not the kind of consumer the music industry should be pinning their hopes on, since nearly all of the CDs I purchase these days are used.
February 23rd, 2009
A few regular readers might remember some old posts where I talked about my attempts to create new discs for the Mattel Optigan organ. I won’t rehash the detailed explanation of what the Optigan is, but in a nutshell, it’s a cheesy home organ that hasn’t been in production for about 35 years. To produce the sounds, audio loops are read from a transparent sound disc. Various discs covered specific musical genres, like country, folk, or polka just to name a few. I’ve always wanted to make my own discs with my own sounds, but technical obstacles always got in my way. I was able to scan existing discs and print copies, but the fidelity was not all that great and it was difficult to punch out the center hole of the disc. Making a disc with brand new sounds presented even more challenges, since software was involved. But some folks with far more technical expertise have managed to do what I never could — create a new Optigan disc. Go here to see a demo. I’m super excited about this development and I’m eagerly awaiting the opportunity to buy a new disc.
October 11th, 2008
I don’t know why I’m so attracted to obsolete technology. Maybe it’s ingrained in my American DNA to root for the underdog. And what a glorious underdog the MiniDisc (MD) is. Introduced in 1992 by Sony, it was a digital alternative to the more expensive DAT format. But Sony has a history of poor marketing when it comes to new media formats (Betamax, Hi-8, etc.) and the MD was no exception. Sony tried to market the MD as a replacement for the CD, but it would have made more sense to sell it as a replacement for the cassette tape, something Philips had tried unsuccessfully with the Digital Compact Cassette (DCC) format.
MD limped along for the rest of the ‘90s before finding its niche, which was affordable digital recording. Because MD cartridges are so small, about 2 inches across, the player/recorders are also small – especially compared to DAT. In 2000, Sony upgraded the MD format to fit more data on a disc using digital compression, allowing more recording time. But Sony totally dropped the ball when the iPod was released by Apple in October of 2001. Instead of allowing the option of playing back MP3s like on the iPod, Sony forced MD users to encode their music files in their proprietary ATRAC format. Finally in 2004, Sony introduced Hi-MD which offered MP3 compatibility and uncompressed audio recording, but they still didn’t have anything like iTunes for users to download music easily (SonicStage=crap!).
I still use my MD player/recorder for a couple of reasons. First, it runs on a single AA battery. If you’re traveling overseas, having something that takes a standard battery and doesn’t need recharging is a plus. Second, the MD is an unobtrusive recording device. I’ve used it for wild sound when filming super 8 and it works great for that. Third, MD player/recorders are cheap. I paid $150.00 new for mine. If I lost my MD, it would be easy and cheap to replace. Would I ever pick my iPod over MD? Well, not until iPods become cheaper and offer easy digital recording. For now, I’ll stick with MD.
August 22nd, 2007