Security in Wireless Networks 58
Asmodeus writes "Those boys at Cambridge have done it again. The Resurrecting Duckling (where do they get these names?) is a description of the security problems in ad-hoc
wireless networks with some nifty solutions to the problems." It's a really interesting techie bit, actually. It talks about the problems with low-power, wireless boxes. It's strange to think that in the wireless world, for example, a denial of service attack could be anything designed to drain your battery.
Re:Bluetooth and 3GPP (Score:1)
Real WLANs using Direct Sequence Spread Spectrum technology have an RF signal level typically below the ambient noise level. So first you have to find the signal. Then you need to rebuild the signal - which is built using up to 51-bit encryption. Then you need an SSID which is relevant and an IP address which hasn't been denied.
Hacking that way is tricky.
Trying to drain battery life is also very difficult - the only sensible way is to raise interference levels so high that nothing gets through. You've got to really try hard to do this!!
I don't think we have much to worry about.
Re:cryptography is questionable (Score:1)
Since the quantum method ensures that only the sender and recipient of a message know this one-time pad, and 'sniffing' (i.e. measurement) of the transmitted photons introduces errors and thus notification that the (quantum) line has been compromised, you can then use the one-time pad to transmit your message via any regular line you choose.
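The "use the one-time pad over any regular line" step is just an XOR. A minimal sketch in Python (ordinary random bytes stand in for the quantum-distributed pad here):

```python
import secrets

def otp_xor(data: bytes, pad: bytes) -> bytes:
    """XOR data with a one-time pad. XOR is its own inverse, so the
    same function both encrypts and decrypts. The pad must be at least
    as long as the message and must never be reused."""
    assert len(pad) >= len(data), "pad must cover the whole message"
    return bytes(d ^ p for d, p in zip(data, pad))

# In the scheme above the pad comes from the quantum exchange; random
# bytes merely stand in for it in this sketch.
pad = secrets.token_bytes(64)
ciphertext = otp_xor(b"meet at dawn", pad)
recovered = otp_xor(ciphertext, pad)
```

Applying the same XOR twice with the same pad recovers the plaintext, which is why sender and recipient need nothing more than the shared pad.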
Re:Bluetooth and 3GPP (Score:1)
could use the broadcast power inherent in a sleep deprivation attack to re-charge its batteries...
Re:Err, what planet are you from?-Back atcha. (Score:1)
Re:reanimating ducklings may work (Score:1)
This is all pretty standard cryptography, and lightweight enough to put in a single-chip embedded system.
Re:Err, what planet are you from?-Back atcha. (Score:1)
However; A CPU's 'power' is the instructions it performs in a given amount of time. The only way to waste computational power is to spend some cycles not executing an instruction or to execute unnecessary instructions. So, which are you saying is happening? And what is your technique for harnessing this 'missing' 99% of our power?
Re:How about Earth? :) :) (Score:2)
However, 100% is still only a x2, not a x100. The x100 is often quoted as the degree of idleness of a machine used for desktop word processing, but it's certainly not a typical figure.
Poor coding exists, but it's certainly not THAT bad. Linux utilizes the CPU better than Windows, true, but its code is still a long way from optimal. The TCP stack needs work, for example - the BSD stack is certainly faster, and that's still by no means perfect.
But poor coding isn't the only factor. Linux is designed to be multi-platform, and generic code will ALWAYS be slower than tightly-written, heavily optimised routines. That's the nature of the beast. You can't be both generic AND take advantage of every little trick a specific CPU or device may have.
Just like my garage door (Score:3)
Maybe I'm missing something, but the idea of imprinting sounds a lot like what my garage door opener does now.
I've got one of those rolling code models [sears.com] from Sears where you have to hold the opener to the remote while pushing a button on each. The door can then be opened by the remote which itself can be programmed to handle three different openers. (Maybe you have more garages than I do. *shrug*). Seems to me that it fits the model discussed here a bit.
Can someone let me know if I've got this or not?
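For what it's worth, the rolling-code pairing can be sketched like this. Note this is a toy model: real openers use proprietary algorithms (KeeLoq and the like), and the HMAC-over-a-counter construction, the window size, and all the names here are invented for illustration.

```python
import hmac, hashlib

class RollingCodeRemote:
    """Toy rolling-code remote: each button press emits a code derived
    from a shared secret and an incrementing counter."""

    def __init__(self, secret: bytes):
        self.secret = secret
        self.counter = 0

    def press(self) -> bytes:
        code = hmac.new(self.secret, self.counter.to_bytes(4, "big"),
                        hashlib.sha256).digest()
        self.counter += 1
        return code

class RollingCodeOpener:
    """The opener 'imprints' on a remote by learning its shared secret
    (the hold-both-buttons step), then accepts codes within a small
    look-ahead window so a few out-of-range presses don't
    desynchronise the pair. Replayed codes are rejected."""

    WINDOW = 16

    def __init__(self):
        self.secret = None
        self.counter = 0

    def imprint(self, secret: bytes):
        self.secret = secret
        self.counter = 0

    def try_open(self, code: bytes) -> bool:
        for n in range(self.counter, self.counter + self.WINDOW):
            expected = hmac.new(self.secret, n.to_bytes(4, "big"),
                                hashlib.sha256).digest()
            if hmac.compare_digest(code, expected):
                self.counter = n + 1   # advance past the used code
                return True
        return False

remote = RollingCodeRemote(b"factory secret")
opener = RollingCodeOpener()
opener.imprint(b"factory secret")
```

The imprinting step is indeed close to the paper's model: physical contact (holding the two units together) establishes the shared secret, after which the opener obeys only imprinted remotes.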
Re:cryptography is questionable (Score:1)
Re:Interesting? Security? Not (Score:1)
A strong solution would be to build a Faraday cage and do the imprinting in it.
Err, what planet are you from? (Score:2)
In practice... well, unlike the imaginary Turing machine, CPUs do not have an infinite amount of memory. If you take an ordinary 8088 and its motherboard, you just can't -have- more than 720K of memory. It's not possible.
Now you -could- put some memory sockets on an ISA card and write a protocol to utilize that memory, etc, but a complete 8088 machine is limited by memory.
With those memory limitations, it is not possible to perform calculations that take up more memory than that!
Unless, of course, we use virtual memory... so... now, with disk-access speed memory, we write a Win95 Emulator (never mind the difficulties in context switching and utter lack of runtime security caused by the lack of a protected mode) and now we're off! We download and install Quake V (it's been a few years since we started this project, you see, a couple more versions came out) and run at... 1 frame per week.
Maybe we try something less ambitious... printing out a pdf document... at one page every six hours.
No, I'm sorry, we utilize -much- more than 1% of our computing power in -many- everyday tasks, and what is theoretically 'possible' with older CPUs is -not- the same as what is feasible.
Besides which,
Obviously, -NO- OS is secure against actual tampering with the hardware directly. After all, the tamperer could always replace the OS with his own boot disk. But that, too, has little to do with the difference between 8088 and Pentium III...
Re:cryptography is questionable (Score:1)
I agree, but it seems rather obvious. (Score:1)
Sounds like you love to tinker as much as I do: making an old hard drive play Jingle Bells by manipulating registers in the controller chip, and other such things.
I agree that hardware knowledge will play an important role in the security battleground of the future; however, I don't think it has been, or currently is, as limited as one might imply from your comments.
The real hackers out there are folks who have an overwhelming desire to know how everything works. To them, your comments might seem more like statements of the obvious rather than commentary about a subject you obviously enjoy. Hacking has always included hardware knowledge as a big part of it and it always will.
A perfect example would be boxing (anyone thinking I'm referring to a sport stop reading here). The people who came up with most of those ideas weren't interested in any sort of fame, they just wanted to know how things worked. That was almost 100% hardware-based, and included knowledge that can be applied to today's electronics quite easily. Fundamentally things haven't changed very much... which was another point of yours.
So... I agree, but I also think many so-called security experts miss the point that vulnerabilities have always existed at the hardware level. It's not just the future we're talking about here. IMO that's as much a fact now as it ever will be.
IPv6 does not apply to this problem. (Score:2)
The first constraint on the system is that of a "peanut CPU". "The consequence of [this constraint] is that, while strong symmetric cryptography is feasible, modular arithmetic is difficult and so is strong asymmetric cryptography." Because of this, these devices cannot use IPv6. In general, the specification clearly shows why conventional solutions to these problems do not apply to these classes of devices.
The power of open source (Score:2)
I'm very interested in wireless and like the authors of this paper, I think it will be very important in the coming years. But I've never thought about things like this 'sleep deprivation attack' they were talking about. To me, this demonstrates one of the most powerful things about the open source/free software community, the fact that there are smart people thinking in ways others wouldn't. When big companies put a group of smart people together, they may very well come up with a great product but they probably won't be able to think of every attack/feature/etc that a larger interested group could think of.
Another example of this is the development of so-called "side channel attacks" in cryptography. People have used things like battery drain, EMF radiation signatures, and others, to attack smart cards and their ilk. Certainly the designers of the smart cards were assured their crypto was up to snuff but they hadn't counted on these side-channel attacks. If this hadn't been discovered until everybody had a smart card in their wallet, it would be a huge catastrophe.
Open thinking is a bit difficult for most big corporations to do, but I think things like this paper will help bring them around. The time of believing that a small group can design important projects in a closed manner is almost finished; there are too many smart people around thinking in new ways.
I know this is a little offtopic but that sleep deprivation attack got me thinking. Which, I guess, is the point.
Hackers vs. Packet Kiddies (Score:1)
A real hacker tinkers with stuff to see how it works. A real hacker gets his pleasure watching something he hacked up run on an obscure piece of hardware.
Packet kiddies are the children that spend their time on IRC, downloading l33t exploits of the month, running their spl01T scanZ on machines, packeting their "enemies", defacing web sites, and only in very rare cases are these kids able to code anything useful on their own (aside from simple Tcl for their eggdrop botnet).
The hackers examine and "hack up" software and hardware for their own education and pride. Packet kiddies do it so they can get recognition (either among their fellow l33t IRC peers, by telling "hacker stories" at school so people think they're cool, or by trying to do something they hope will get them in the newspaper).
Packet kiddies don't know squat about electronics, and I doubt will ever have a desire to learn about it (it's too hard for most script kiddies, who tend to be pretty lazy/undisciplined). Those that do take the plunge tend to easily be the more mature of both worlds. (There are exceptions, sadly.)
Re:cryptography is questionable (Score:2)
To do quantum computing and quantum communication you have to have perfect control of the system, and prevent absolutely any interactions between it and its environment.
As a theoretical exercise quantum algorithms are certainly fascinating; but I suspect that in reality a quantum computer with enough completely isolated and non-interacting 'gates' to run the factorisation algorithm is unlikely ever to be achieved.
Similarly, quantum communication might work along optical fibres or tightly focussed laser beams, but I think you would have a lot of problems trying to detect the very subtle single-photon correlations using wireless against a noisy RF background.
But I'd be delighted to be proved wrong on either of the above!
OS can be hardened against tampering (Score:2)
That's not obvious at all. For example, what do you (the invader) do if there's no floppy drive? Start pulling chips? What do you do if there's a floppy drive, and there's no password protection in the bios, you can boot from your floppy, but the file system is encrypted? Or any number of other simple obstacles that could be placed in your way.
The point is, it is possible to harden the OS (and by extension the network) against invasion, both by hardware and software means.
Re:Tucson is in the Southwest (Score:1)
-----------
"You can't shake the Devil's hand and say you're only kidding."
Re:Err, what planet are you from?-Back atcha. (Score:1)
Next... a wait-looping program does -not- waste huge amounts of CPU. It wastes no more nor less than that process's share of CPU. Granted, wait-loops are inefficient, and it's better to explicitly relinquish the CPU with some sort of system call. However, this is not the same as 'wasting' 99% of the CPU power.
Which, really, is the only point I'm taking issue with, here. It simply -is not true- that the CPU is 100 times more powerful than what we make use of. If it were, a rival operating system like Linux or BSD or SCO-Unixware or Solaris x86 would 'do it right' and be 100 times more powerful than Windows! That doesn't happen because we are not, even under Windows, wasting 99 percent of our CPU time.
Nor are the CPU meter utilities the only way that I've looked at CPU usage (although I disagree that they are inaccurate measures, but never mind that). I've used WinDbg on Windows and kdb on Unixware to step through various problems I was debugging, done various bits of profiling to test for real-time latencies, etc. We 'waste' at -most- ten percent of the CPU time doing context-switches, page-faults, and other OS tasks. I'm not making this stuff up, you know, there's plenty of literature on OS design that talks about these things. And honestly... don't you think it's a little bit arrogant of you to think you're the only person in the world who has noticed that we could get 100 times more out of our computers if only we 'did it right'? Do you really think OS designers are so blind as to have made that grievous an error?
Re:Just like my garage door (Score:2)
This problem can be solved by storing a separate key for each remote device and having the door opener react to each one. That increases the possibility of breaking the key, but allows for multiple master remotes. The question is how many keys to store. Currently, electronic devices with remotes can be spoofed by a universal remote, providing us with a master remote, but you can still use the original remote to work the device. Even with the introduction of authorization security, that situation is not likely to change, so there is a minimum of two remotes for the device to be "imprinted" to. There may be a need for more. So each device will need to have a max # of remotes it can become imprinted on.
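The per-remote key table with a hard cap might look something like this sketch (the class name, the limit of four, and the key values are all invented for illustration):

```python
class ImprintableDevice:
    """Toy device that can imprint on up to MAX_MOTHERS remotes.
    Once the table is full, no new masters are accepted until an
    existing one is removed."""

    MAX_MOTHERS = 4   # illustrative cap on imprinted remotes

    def __init__(self):
        self.keys = []

    def imprint(self, key: bytes) -> bool:
        if key in self.keys:
            return True               # already a master
        if len(self.keys) >= self.MAX_MOTHERS:
            return False              # table full: refuse new masters
        self.keys.append(key)
        return True

    def accepts(self, key: bytes) -> bool:
        return key in self.keys

device = ImprintableDevice()
device.imprint(b"remote-1")
device.imprint(b"remote-2")
```

The interesting policy question is what happens at the cap: refuse (as here), or let an existing master authorize evicting an old key.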
Hmm... So the "resurrected duckling" may need to be "imprinted" to multiple "mothers". Great, another image to digest, polygamist lesbian ducks raising undead ducklings.
-S. Louie
Re:Another great job ... (Score:1)
You could, in theory anyway, use public key cryptography with good assurances... just issue your public key and your user's pre-generated private key to each new user with the rest of the install software...
If you could do it, actually, it'd be a really good idea for every session to be encrypted. Plaintext passwords over a phone line is one thing... theoretically vulnerable to man-in-the-middle interception, but not likely... over the air is another thing altogether. Anyone could be listening.
Re:How about Earth? :) :) (Score:1)
You support the notion that you can do anything on an 8088 that you can do on a PentiumIII with no loss of performance simply by writing the software more efficiently?
This is the idea that I am contesting. The idea that we are only using 1% of our CPU power.
Granted, word processing leaves the machine idle. So does booting it up, setting it to never start a screen saver, and not launching any applications. Or just playing solitaire. In these cases, you're just sitting around waiting for user input and then doing a little bit of drawing.
But I contest the notion that there is a hidden 99% of our power that is not being used because of poor coding practices.
If this were true in the Windows case, wouldn't a Linux that utilizes the CPU fully be 100 times more powerful than Windows? And can you honestly say that Linux -is- 100 times more powerful than Windows? I certainly wouldn't, not in the benchmarking sense, anyway.
Re:OS can be hardened against tampering (Score:1)
An encrypted OS is a cleverer idea, but it has to be implemented carefully. Now that I've broken physically into the lab and copied and/or taken the hard-drive I can take my time cracking the encryption. The initial authentication had better be stronger than 8 letters.
If the machine can be booted up without authentication, I take off the case, replace the CPU with an ICE, and read/write directly to memory to bypass security. If you can't trust the integrity of memory, you're sunk. If it can't be booted up without authentication, well, that's awfully inconvenient. But it is more secure.
Better yet... if you want your machine to be secure from physical attack, secure it physically. 'cause -nothing- is going to stop a DoS attack if the cracker gets physical access. It'll take hardly any amount of explosive at all for that...
reanimating ducklings may work (Score:1)
Considering how many IP addresses IPv6 makes available, devices will probably have permanent IP addresses assigned. Consider also how wired the devices we are talking about will be. When the consumer buys these devices, the routing information can be burned in by the reseller or manufacturer, because said IPs will most probably be stored (and available for transfer to the reseller) in either all devices you own, or your main computer.
So, because there is a trusted IP table, the devices will only listen to (recognize) devices (yours and those you allow) listed on its burned-in table. Imagine the table only being wipeable by the IPs that it trusts. That means that the duckling will not be killed by any device other than the owner's (in theory hacking this would require someone to physically take the device from the owner, but that level of trust regarding security is dumb). And therefore it cannot be reanimated to recognize any other devices except when the owner of the controlling device deems it necessary.
At least, that is what I understood from the article.
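The burned-in trust table described above can be sketched in a few lines. Everything here is invented for illustration: the addresses, the command names, and the assumption that each command arrives tagged with a sender IP.

```python
class Duckling:
    """Toy model of the burned-in trusted-IP idea. Only addresses in
    the table are obeyed, and only a trusted address may wipe (kill)
    the table so the device can later be re-imprinted."""

    def __init__(self, trusted_ips):
        self.trusted = set(trusted_ips)   # burned in by the reseller

    def handle(self, sender_ip: str, command: str) -> bool:
        if sender_ip not in self.trusted:
            return False                  # strangers are simply ignored
        if command == "wipe":
            # The duckling "dies": the table is emptied and the device
            # waits to be re-imprinted by its owner.
            self.trusted.clear()
        return True

d = Duckling(["2001:db8::1", "2001:db8::2"])
```

Note the failure mode this exposes: after a wipe, even the former owner is a stranger, so the re-imprinting step has to happen out of band (physical contact, in the paper's model).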
Re:reanimating ducklings may work (Score:1)
--
The opposition... (Score:2)
And right now, there's a group of rednecks in Alabama with a dozen bearcat scanners trying to intercept wireless communications. They think the Miller Lite they're drinking is going to help them.
Brad Johnson
Advisory Editor
Another great job ... (Score:3)
thanks to all involved
Bain
Hacking is not just software based (Score:1)
Bluetooth and 3GPP (Score:2)
However, most of these devices are rated on just this sort of continual broadcast. Take a look at the specs for recent cell phones. They list total broadcast time, as well as standby time. Bluetooth specs also detail power drain on a broadcast/standby basis.
End result? Manufacturers will get wise to these attacks and figure out a way to ignore malicious devices. I seem to remember them talking about this, but I can't point to any documents regarding it.
However, this is just one of the issues being addressed in the Bluetooth (pico area nets) and 3GPP (next generation mobile phones) groups. The really big problem is how you keep others from listening in on your conversation. In both groups, part of the answer is frequency hopping, plus a small amount of encryption (as allowed by the Feds). Authentication is already in place to disallow most spoofing. It is always possible to spoof; it just depends on how hard you have to work at it.
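The frequency-hopping part can be illustrated as both ends deriving the same hop sequence from a shared secret, so they hop together while an eavesdropper without the secret cannot predict the next channel. This is only a concept sketch: Bluetooth's real hop-selection kernel is specified quite differently, and the hash-of-a-counter construction here is invented.

```python
import hashlib

def hop_sequence(shared_secret: bytes, n_hops: int, n_channels: int = 79):
    """Derive a pseudo-random channel-hopping sequence from a shared
    secret. Both ends compute the same sequence; a listener without
    the secret sees apparently random channel changes. (79 channels
    matches the 2.4 GHz Bluetooth band; the derivation itself is a
    toy stand-in for the real hop-selection kernel.)"""
    channels = []
    counter = 0
    while len(channels) < n_hops:
        digest = hashlib.sha256(
            shared_secret + counter.to_bytes(8, "big")).digest()
        channels.append(digest[0] % n_channels)  # slight modulo bias; fine for a sketch
        counter += 1
    return channels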
Re:The opposition... (Score:3)
Actually, there is a group in Alabama who have developed time modulated ultra wideband chips that promise extraordinary wireless bit rates and nearly perfect security. Check out Time Domain [time-domain.com]. In addition to wireless LAN, you can use the stuff for pocket sized radar (see through walls!) and GPS to within centimeters! Anyway, I think it looks cool and haven't yet seen a story about it on /. (I submitted it 10 months ago, though)
High-speed Wireless (Score:2)
All journeys start with a single step... (Score:2)
And
Security Issues. (Score:2)
SL33ZE, MCSD
em: joedipshit@hotmail.com
cryptography is questionable (Score:2)
IPv6 licks this problem. (Score:2)
Neat, huh?
--
Sickly ducklings: software tamper seals. (Score:3)
The concept of resurrected ducklings, however, might have broader implications. Indeed, it might serve to solve some of the problems with trusted kernel code.
Suppose that we create "sickly ducklings" - processes that will die if interfered with. One way to look at hacking is that it is an attempt to obtain unexpected responses from a program based on unexpected inputs, and to take advantage of those responses. A fragile duckling, confronted with unexpected input, would die - or perhaps enter a more sickly state.
[Reference to the "DOOM kill process article" elsewhere on slashdot is intentional.]
If the kernel code is fragile, then any attempt to interact with it by unauthorized entities will kill it. The program can then reinitialize itself, with a new identity. Any subsequent reference to this duckling by an authorized user will reveal the tamper.
Obviously the code must be small, and must interact in (formally) defined ways - much like a security kernel.
Combine this with Kerberos-style tickets, or better yet Yaksha, and I think this might form the basis of software tamperproofing.
[Yes, a well prepared adversary can kill a lot of ducklings to discover an "addictive duckling medicine" that will enable him/her to cure the duckling, and manipulate the cured duckling. But I suspect the ease of discovering that medicine is related to key/secret size.]
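A toy illustration of the sickly-duckling idea: any input outside the formally defined command set kills the process, which reinitialises under a fresh identity, so a later authorized reference to the old identity reveals the tamper. The command names and identity scheme are made up for the sketch.

```python
import os

class SicklyDuckling:
    """Tamper-evident process sketch: unexpected input kills it and
    it comes back with a new identity, exposing the interference."""

    COMMANDS = {"status", "read", "write"}   # the formally defined interface

    def __init__(self):
        self.reinitialise()

    def reinitialise(self):
        # A new random name on every resurrection; an authorized user
        # holding the old name will notice it no longer answers.
        self.identity = os.urandom(8).hex()

    def handle(self, command: str) -> str:
        if command not in self.COMMANDS:
            old = self.identity
            self.reinitialise()              # die and come back different
            return f"duckling {old} died"
        return f"{self.identity}: ok"

d = SicklyDuckling()
```

Expected inputs leave the identity alone; anything else triggers the die-and-resurrect cycle.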
Re:cryptography is questionable (Score:1)
Re:Security Issues. (Score:1)
Re:IPv6 licks this problem. (Score:1)
There was an IPv5. Two if I remember correctly (which is part of the problem), but it/they was/were never designed for wide scale use. They solve some problem that most of us will never encounter, but you can find the RFCs for them.
Re:Sickly ducklings: software tamper seals. (Score:2)
Well, I found the link (Score:2)
Interesting! (Score:3)
There are two additional thoughts I would like to share. First, a lot of this should be considered today. Examples include wake-on-LAN and power-management systems, as well as laptops. For the first, assume a company has several hundred workstations that use wake-on-LAN technology or other power management (maybe wake on modem activity?). A lot of power is consumed while those devices are "awake", so it would seem logical to put them to sleep when not in use (to save money on power). Somebody could simply walk up to a station and start sending out rogue "Wake up!" packets across the network, wasting large amounts of electricity and costing the company hundreds of dollars each day. This is, of course, theoretical... but it underscores what these guys are talking about - conventional security wisdom isn't applicable in all situations.
I like the message. It's a wake up (pardon the pun) call for security analysts - consider your requirements! Locking everything down military-style does little good if an attacker can just start turning devices off at will by draining away all their power!
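The rogue "Wake up!" packet really is that easy to forge: a standard Wake-on-LAN magic packet is just six 0xFF bytes followed by the target MAC address repeated sixteen times, normally sent as a UDP broadcast that any host on the LAN can emit.

```python
def magic_packet(mac: str) -> bytes:
    """Build a standard Wake-on-LAN 'magic packet': six 0xFF bytes
    followed by the target MAC repeated sixteen times. There is no
    authentication in the format, which is exactly why the rogue
    wake-up attack described above works."""
    mac_bytes = bytes.fromhex(mac.replace(":", ""))
    assert len(mac_bytes) == 6, "MAC address must be 6 bytes"
    return b"\xff" * 6 + mac_bytes * 16

# Example MAC chosen arbitrarily for illustration.
pkt = magic_packet("01:23:45:67:89:ab")
```

Sending it is one `socket.sendto` to the broadcast address on UDP port 9; nothing in the protocol checks who sent it.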
--
How about Earth? :) :) (Score:2)
The trick is to use software paging. If you can spare enough memory to hold paging software and a software register, and your bus can transmit that data to a card, there is NOTHING to stop you having an unlimited amount of memory in your computer.
An 8088, expanded this way, could easily handle over one million pages, each 1 megabyte in size, totalling 1 terabyte of RAM.
An 8088 could drive the 20-bit address bus, giving it a total of 1 megabyte of addressable RAM. However, the addressable space, internal to the processor, was the full 32 bits. If you had a TSR which read this value and programmed a card with it, you could bypass the limitations of the rather idiotic address bus design.
CPU "Protected Mode"? Same rules apply. Write something in software to produce a similar effect. Yes, you add a layer, but it's not going to slow you -that- much, as it doesn't have to -do- much.
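The bank-switching scheme above can be modelled in a few lines. The sizes and the page-register interface are illustrative; a real 8088 TSR would poke an I/O port on the expansion card rather than call a method.

```python
class BankedMemory:
    """Toy model of software paging: a small real-memory window plus a
    'page register' selecting which 1 MB page of a huge virtual space
    the window currently maps."""

    PAGE_SIZE = 1 << 20               # 1 MB pages, as in the post above

    def __init__(self):
        self.pages = {}               # page number -> bytearray, allocated lazily
        self.page_register = 0

    def select_page(self, n: int):
        self.page_register = n        # what the TSR would write to the card

    def _window(self) -> bytearray:
        return self.pages.setdefault(self.page_register,
                                     bytearray(self.PAGE_SIZE))

    def poke(self, offset: int, value: int):
        self._window()[offset] = value

    def peek(self, offset: int) -> int:
        return self._window()[offset]

mem = BankedMemory()
mem.select_page(1_000_000)            # page one million: ~1 TB into the space
mem.poke(0, 42)
```

Every access goes through the page register, which is also why the post's performance caveat applies: the indirection is cheap per access but it is on every access.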
I agree that modern =LINUX= software utilises the processor a lot more than 1%. At least, when I use it, it does! I'm often getting between 98%-102%, as shown by 'top'. On the other hand, I do a lot of processor-intensive stuff. Wordprocessing leaves the machine unbelievably idle, and even regular stuff that floods the cache can end up injecting 4-5 wait-states for every machine-level instruction executed.
Encryption... and bandwidth. (Score:1)
I suppose they expect to only worry about data between each hop, as then you have to guess where the new hop takes it (unless of course you KNOW where it's going by previous watching).
At 2.4 GHz, it's interesting to note that there aren't exactly a lot of channels available up there as the bandwidths get larger. You get about 60-80 distinct channels (it varies from country to country) at 1Mbit, fewer (not sure of the exact amount) at 2Mbit, and a grand total of 3 at 11Mbit. (This might look weird... 80/11=7.something, and 60/11=5.something. However, at higher bandwidths you get larger amounts of edge bleeding, which is why there are only 3 channels, and in the interest of international conformance, channel usage has been reduced to fit the wider market.)
At 11Mbit, if one of these channels is occupied then you have only 2 alternatives. If they're all blocked, well, you're going to have a real problem aren't you?
It isn't hard to build devices to jam the entire band either. What can be more devastating, however, is the power they put out, which when received by an antenna might be strong enough to actually fry circuitry. Many microwave ovens generate frequencies around the 2.4 GHz band, some through direct emissions at 2.4 GHz, but most through harmonics. At between 600-700 Watts total, the ordinary microwave distributes a lot more power at 2.4 GHz than the puny 500mW or 100mW that most countries allow (fortunately microwave ovens are shielded and most of this doesn't escape, especially around us humans).
Denial of service can take many forms, and current radio networks can be easily disrupted through signal damage, power drain, or signal jamming. The problem, however, is getting everyone around the world to agree on a standard and spectrum that will allow large data bandwidth without country-specific problems, while allowing lots of channels, ideally in a lowish frequency band to allow longer-distance communications. The problem is, of course, that those bands are already taken.
Re:cryptography is questionable (Score:2)
Re:Interesting! (Score:1)
Re:Err, what planet are you from?-Back atcha. (Score:1)
Re:Err, what planet are you from?-Back atcha. (Score:1)
720K/1Meg... it isn't really a relevant difference. The point is that for the sorts of large calculations used in cryptography, 3d rendering, etc., memory is the binding constraint.
Second: The level of the code you write, and your ability as a code writer indicate what you can make your machine do. You are wrong about what machines today utilize on their chip capability.
There are utilities for both Windows and Linux that show your percentage CPU usage. 2 or 3 percent is typical for an idle system, 90-something percent for an intensive videogame, maybe 50% for streaming video - bandwidth is usually the constraining factor there. Exact percentages vary from machine to machine, obviously... a Pentium I with a direct T1 connection is going to have different constraints than a quad-Athlon machine with a 33.6 modem.
At any rate, I regularly work with low-level code - driver/kernel level code, and standalone code - and I'm quite aware of what it takes to saturate a CPU.
First- Ever try Win3.1 on a Pentium III? Wanna know why it runs better? The code can be executed better through the CPU because of the CPU's capability.
Exactly. It is because the CPU was saturated, fully utilized, unable to perform any better, that a more powerful CPU allows the system to perform better. If, as you suggest, CPUs were massively under-utilized, then it wouldn't matter whether or not you had a more powerful CPU. To draw an analogy...
If I'm trying to drive to work on a 65 mph speed limit highway, if I'm driving a Mustang, I'm underutilizing the car, and upgrading to a Ferrari doesn't get me there any faster. (Unless I break the law. Ok, so it's a weak analogy).
If, OTOH, I'm driving a Model-T with a top speed of 45 mph... I'm fully utilized. Upgrading to a Mustang or a Ferrari -will- show improvement in my accomplishing the task.
What you're saying is equivalent to saying 'because upgrading from a Model-T to a Ferrari lets you go faster, this proves that we don't drive our Model-T's as fast as we could. If we wanted, we could drive them as fast as Ferraris!'
It just doesn't make any sense.
-You can even use a weak HD and it runs better. If you've ever studied CPU architecture and how to program registers, you know this.
Sure. Hard drive has nothing to do with actual execution unless you start swapping. I don't see how this supports your point.
Second-this is the same way you can protect your machine against tampering. -...But as any good security expert, I will leave that a topic for another day.
I'm sorry, come again? Because hard drive speed is not a limiting factor on program execution time, we can secure our machines... how?
Third....WAY wrong with what is used today. In plain English, you are utilizing a 32-bit bus to process a 32-bit instruction that doesn't need to be 32 bits in length. It could be done in 8. It's called bloatware, Microsoft's famous trademark. For proof, see above about registers.
Uhmmmm... no. Between caching, pipelining, branch prediction, and all that, this just isn't how it works. I'm no CPU architecture expert, but this violates even the basic principles. First of all, I believe the instructions themselves are still (mostly) 8 bits. So a single bus fetch of 32 bits could fetch up to four instructions. Or an instruction and three bytes of arguments. Or whatever. Granted, I haven't studied the Pentium architecture that thoroughly, but... when we went from 8 to 16 bits we did not go from 8-bit instructions to 16-bit instructions, nor did the 386 have 32-bit instructions. You will recall, please, that 8088 binary code runs on a Pentium unchanged. Instructions are still 8 bits. More instructions are fetched per bus cycle with a larger CPU bus, not more memory wasted per instruction.
Re:IPv5 & ST-II (Score:1)
RSVP is an attempt to replicate this stateful bandwidth control model without having to modify the underlying IP protocols. It has many of the same problems, however, with maintaining distributed state. RSVP did learn a number of lessons from ST-II, and can deal with partial failures (where some of the intervening hops lose their bandwidth information) much more cleanly. Still, RSVP is considered a pretty heavyweight mechanism.
Differentiated Services is yet another Quality of Service effort at the IETF. DS takes the opposite tack. There is no global bandwidth reservation; everything is resolved hop by hop. That is, each network link in the path makes its best effort to meet the QoS defined for that packet. There are no guarantees, but it works well in practice. It's just like the IP protocol itself: there are no guarantees, but the Internet works pretty well in practice.