Cryptography Mailing List 2015


The Crypto Pi

This year had some devastating news about the state of the internet infrastructure in store and brought some disillusionment about the vulnerability of the tools we use every day. And, by the way, it turned out that Johnny still can't encrypt.

To change that, I've put some effort into developing the crypto pi.

Running on a minimalist Linux OS with only the necessary tools installed, the crypto pi differs both from internet servers and from insecure endpoints. It helps Johnny establish communication that is always encrypted, without burdening him with complex tasks that would only drive him away from secure communication.

The crypto pi is in the early stages of its life, and although it's working and looking great, it does not yet have the most important ingredient for a happy life: peer review.

So I'd like to ask all of you who think that a well-designed, isolated crypto box, under the sole control of its user and capable of encrypting messages reliably, may improve the situation we're facing today: lend a hand and scrutinize the design and implementation of the crypto pi. Let's make the crypto pi a success in 2015, together.

The concept of secure communication using the crypto pi relies on several assumptions, not all of which everyone will agree with:

  1. Johnny will be able to communicate securely with people he knows, provided he has been able to exchange an initial secret (on a piece of paper, via telephone, or some other way).
  2. Johnny's endpoint device is not trustworthy, as it runs all kinds of complex programs that may attack the secrets on his device, without notice and in unforeseeable ways.
  3. Apart from feeding the crypto pi with the initial secret and the email address of the recipient (by filling out a form), Johnny has nothing to do with key management, but he will be able to verify that message encryption has been performed.
  4. All secrets are stored on the crypto pi, and messages leave the crypto pi AES-encrypted with a strong, randomly generated key (see the sketch after this list). The crypto pi does not use public key cryptography; there is no PKI and no CAs are involved.
  5. Johnny uses one single secret, known to him alone, to establish an encrypted tunnel to the crypto pi, over which he interacts with the web server on the crypto pi to read and write messages.
  6. The local network in which the crypto pi works is not trustworthy, so all information that originates from the crypto pi is encrypted, and only encrypted information entering the crypto pi will be processed inside.
  7. Although desirable, ensuring anonymity is not a pre-requisite of the crypto pi's design (at the moment).
  8. All source code is licensed under GPL.
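
To illustrate assumption 4, here is a minimal sketch of per-message encryption with a fresh random key. It is not the crypto pi's actual code (the box itself relies on tools like /usr/bin/gpg); the Python 'cryptography' package and the function name are assumptions made for this example.

    # Minimal sketch of assumption 4 (not the crypto pi's actual code):
    # every message is encrypted with a fresh, randomly generated AES
    # key; the key stays on the box, only the ciphertext leaves it.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def encrypt_message(plaintext: bytes):
        key = AESGCM.generate_key(bit_length=256)   # strong random key
        nonce = os.urandom(12)                      # unique per encryption
        ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
        return key, nonce, ciphertext               # key never leaves the box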

Fortunately, the crypto pi has a home (crypto-pi.com) where you can find more detailed information about its fundamental concepts and implementation. Rest assured that your criticism and constructive suggestions will be used to improve this project.

Best wishes for 2015

Ralf Senderek


The Crypto Pi - Discussion

On Fri, 2 Jan 2015 raindog wrote:

> The R Pi runs a mass of closed-source, proprietary firmware that can never be audited.
> You're loading this firmware into the kernel and trusting it does
> what it says it does. For some applications you don't care about that, but for this...
>
> It's one of the major drawbacks of the R Pi. I would love to see something like this
> that runs on hardware that is more open (such as the BeagleBone).

I don't know how much more auditable the BeagleBone will be compared to the Raspberry Pi. To be honest, we don't have a platform that is completely auditable, and we probably won't get one in the near future. So we have to use what we have, even if it is not ideal.

> Complete personal opinion: I'd prefer something that runs on the best audited/most
> secure/most crypto-friendly, namely OpenBSD. However, that is just my opinion

Of course the core Crypto Pi software is not limited to Linux; it will probably run out of the box on a minimalist OpenBSD, as long as the OS provides bash, python and the essential tools (/usr/bin/gpg, hostapd, apache, postfix). If there were a mini, or better yet a micro, version of OpenBSD running on the Raspberry Pi, I'd love to use it as the foundation for the Crypto Pi.

> Some other thoughts:
>
> 1. Is your choice of symmetric shared-secret technology based on Johnny's inability
> to understand PKI?

Actually, there are two reasons. First, public key cryptography is not necessary for secure message exchange, as I assume Johnny has been able to hand over a random secret to his correspondent at a meeting or in some other suitable way. Second, using a series of symmetric keys makes recovery from a compromise much easier and more reliable than the revocation of public keys that becomes necessary when private keys have been compromised.
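
The general idea behind a series of symmetric keys can be sketched as follows. This is an illustration only, not the Crypto Pi's actual key schedule; the hash-based derivation is an assumption of this example.

    # Illustration, NOT the Crypto Pi's actual key schedule: each message
    # key is used once and then replaced, and the old key is destroyed,
    # so a compromise of the current key exposes no past traffic.
    import hashlib
    import os

    def next_key(current_key: bytes) -> bytes:
        # Derive the successor; once the predecessor is deleted,
        # ciphertexts created under it stay unreadable.
        return hashlib.sha256(b"ratchet" + current_key).digest()

    key = os.urandom(32)   # stands in for the secret exchanged in person
    key = next_key(key)    # rotate after each message; the old key is gone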

> 2. Unless I'm missing something, if ${EVIL_SPIES} steal the R Pi, they have access to
> everything Johnny has sent,

No: the keys used to encrypt past messages have already been destroyed, as have the messages themselves.

> can imitate Johnny, etc. Or even if Johnny's backpack gets
> stolen, whoever picks it up will be able to do these things.

Yes, if they can steal the Crypto Pi together with the medium on which the secrets are stored, they can. If Johnny uses a model B, his secrets reside on a USB memory key that has to be removed while the Crypto Pi is not in use. On the model A+, secrets are stored on the SD card, which has to be removed just like the memory stick.

> 3. ${EVIL_SPIES} might also grab Johnny at a cafe while his device is still powered on
> and unencrypted. They'll bring their own UPS power and splice into it so it
> never powers off, so this should have some sort of regular re-encrypt/unmount/etc,
> ideally loaded in the kernel or something that can't just be disabled by someone
> killing a process.

The Crypto Pi's threat model won't protect against a raid on Johnny's device while he is using it. At gunpoint, it won't help if the secrets on the Crypto Pi are encrypted, because a bit of rubber-hose cryptography would suffice to get at them. What someone can do then is act like Johnny, but Johnny could send his correspondents a message telling them that he has been "hacked" and asking them to reset his key on their devices. The stolen Crypto Pi will then be of no use, except for the limited time until his correspondents receive his notice and act accordingly.


The Crypto Pi - Discussion

On Fri, 2 Jan 2015 Jonathan Thornburg wrote:

> For a crypto appliance where security is paramount, would it
> be appropriate to choose as your foundation an OS with a more
> "secure by default" philosophy than modern Linux has adopted?
>
> For example, should ASLR be turned on by default for all processes?
> Should malloc() aggressively randomize the addresses it returns?
> Should free() "poison" freed blocks of memory in its data structures
> so that a later duplicate-free, or a later reuse without re-malloc-ing,
> causes a process-abort? Should malloc() try to proactively check for
> buffer overruns? Should the system gcc emit code which tries to
> catch buffer overruns? If the Crypto Pi will be configured with
> swap space, should swap-space encryption be turned on by default
> to ensure that sensitive keying material never hits the disk?
> Should trying to dereference a NULL pointer be a process-abort
> (to expose the failure) rather than a soft failure?
> Should system daemons be privilege-separated, so that bugs which
> do occur, are less likely to be exploitable?
>
> To me, these criteria suggest something like OpenBSD as your OS,
> rather than Linux:
> http://www.openbsd.org/papers/ru13-deraadt/
> http://homepages.laas.fr/matthieu/talks/openbsd-h2k9.pdf
> http://www.openbsd.org/papers/eurobsdcon2009/otto-malloc.pdf

Thank you for the links. If there were a mini, or better yet a micro, version of OpenBSD running on the Raspberry Pi, I'd love to use it as the foundation for the Crypto Pi. The criteria you mention would be very useful for a single binary running on the user's endpoint device, where the attack surface is very large. The Crypto Pi itself is designed as an isolated entity that is accessible only through a single encrypted tunnel, like a VPN, so the attack surface is reduced. That doesn't mean your suggestions should not be followed as far as possible inside the Crypto Pi.


The Crypto Pi - Discussion

On Wed, 14 Jan 2015 John Gilmore wrote:

> [quoting Bakul Shah]
>
> > On linux/pi the h/w RNG is available under /dev/hwrng.
> > /dev/random is their standard "cryptographic pseudo RNG". It
> > seems to be extremely slow (80 bits/second) or broken or has
> > the wrong default settings.
> >
> FreeBSD/pi /dev/random is much much faster @ 33 Mibits/sec. AFAIK
> > it doesn't use the h/w RNG.
>
> The problem on Linux is probably that they failed to run rngd, the
> hardware randomness daemon. (In Debian/Ubuntu it's in the "rng-tools"
> package.)
>
> The problem on FreeBSD is that /dev/random doesn't wait for entropy,
> it just makes pseudorandomness as fast as it can.

The Crypto Pi needs a random key with at least 128 bits of entropy for every message (AES). The desirable hardware platform would be the BeagleBone, running OpenBSD, to make auditing possible.

But there is a problem with the randomness source on the BeagleBone. I've monitored the state of the kernel's entropy pool via /proc and found that reading 10 bytes from /dev/random makes the entropy level drop by 52 bits. Reading another 10 bytes a short time later blocks the beagle for 54 seconds. Reading 20 bytes for the first time removes 116 bits of entropy from the pool, and the second read blocks for nearly 70 seconds. The BeagleBone needs 143 seconds to recover and to add 100 bits of entropy back to the pool. There is no rngd running.
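
For anyone who wants to reproduce these measurements, here is a minimal sketch of the method, assuming the standard Linux procfs path; the byte count is just an example.

    # Read a few bytes from /dev/random and watch the kernel's entropy
    # pool drain via procfs (Linux only).
    import time

    POOL = "/proc/sys/kernel/random/entropy_avail"

    def pool_bits() -> int:
        with open(POOL) as f:
            return int(f.read())

    before = pool_bits()
    start = time.time()
    with open("/dev/random", "rb") as rnd:
        rnd.read(10)        # may block until the pool refills
    print("drained %d bits in %.1f s"
          % (before - pool_bits(), time.time() - start))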

On the Raspberry Pi, reading 10 bytes drains 180 to 240 bits from the pool, and reading 100 bytes drains 960 bits of entropy, but the RPi recovers rather quickly (with or without rngd running), at a speed of 100 bits per second.

OpenBSD will only be available on the BeagleBone, but the questionable random source on the beagle might justify the choice of the RPi as the hardware platform for the Crypto Pi.


The Crypto Bone's Threat Model

Ray Dillinger and Jerry Leichter have made suggestions on how an electronic device can be reset by its legitimate user. Excuse my tendency to apply these suggestions to the Crypto Bone, its available features, and its threat model.

A user who wants to "reset" his working Crypto Bone does not wish to get a new, empty one. For some reason he insists that the VPN connection use a new key, that the message key database be encrypted with a new masterkey, and that he alone hold these two new keys, on a removable USB medium, in his own hands.

Bear's suggestion (with more general demands on hardware design):

> Essentially this amounts to separating all firmware updating
> from all normal operation at as low a level as possible, while
> providing a "hardware reset" capability to restore a known
> firmware configuration.
>
> 1) The Device has in ROM (ie, not rewritable AT ALL) the
> required code to disable the device's external connections,
> then "flash" all the rewritable firmware in the device
> back to a known configuration. Any updates to firmware
> beyond the default configuration, must happen again
> after any time this code is run. Absolutely no on-
> device memory image anywhere, save only the non-rewritable
> ROM itself, survives the autoflash.
>
> 2) The "autoflash" action described above happens when and only
> when a physical switch on/in the device is triggered by a
> human. It cannot be triggered nor prevented in any way
> from software, including firmware.
>
> I'd suggest a switch with a good mollyguard; maybe a
> tiny button located behind a 12mm-deep 1mm-wide hole, such
> that someone would have to stick a wire into the socket
> to reset it or something. Or maybe a DIP switch mounted
> behind a snap-out panel. Up to you, just don't leave it
> hanging out where someone could hit it accidentally.
>
> 3) When in its "autoflashed" mode it can copy new firmware
> images from the mSD card into its own RAM, but does not
> actually install any of it, on any device, at all,
> until....
>
> 4) When the operator flips the switch back to "normal operation"
> mode, a different chunk of the ROM code runs. It flashes
> the new images to all on-device firmware, disables all
> firmware updating, and THEN re-enables network communications.
> At no time until it's finished running does it call any of
> the firmware it is installing or has installed.

1) is equivalent to inserting a new mSD card: the database is lost and new keys are generated, which is not what the user wants. A reset should preserve the message keys. If need be, the user could reset a single message key in the usual way.

2) can be done by requesting the user (legitimate or not) to perform some identifiable action on the BeagleBone's GPIO pins, like morsing SOS on a pin by short-circuiting pin x with pin y (see the sketch after these four points). There's no need for a switch, if you don't insist that the action "cannot be triggered" by software. Software will surely be necessary for that.

3) is no problem, as the new keys can be written to the ramdisk internally, without replacing the working keys.

4) This is the crucial part that activates the new keys, and here the switch alone does not suffice. I'll continue with this point after Jerry's suggestion.
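
Here is a minimal sketch of the GPIO action from point 2), assuming the Linux sysfs GPIO interface on the BeagleBone; the pin number, the timing and the pulse threshold are invented for this example, and a real implementation would decode a proper pattern like SOS.

    # Poll a (hypothetical, pre-exported) GPIO input pin and treat a
    # human-made pulse pattern as a reset request.
    import time

    PIN = "/sys/class/gpio/gpio60/value"    # assumed input pin

    def read_pin() -> int:
        with open(PIN) as f:
            return int(f.read().strip())

    def count_pulses(window: float = 5.0) -> int:
        pulses, last = 0, 0
        end = time.time() + window
        while time.time() < end:
            value = read_pin()
            if value and not last:          # rising edge = one pulse
                pulses += 1
            last = value
            time.sleep(0.01)
        return pulses

    if count_pulses() >= 3:                 # crude stand-in for "SOS"
        print("reset request detected; now ask for the old masterkey")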

Jerry Leichter's suggestion:

> I'd suggest the following:
>
> 1. The modifiable portion of the device contains a signature key based on a
> published algorithm.
>
> 2. To change the key, not only do you have to enable the mechanical
> interlock, but you must provide both the old key and the new key.
> (Or perhaps sign the update with the old key.
> It would seem these are equivalent, though I'd want to think about it
> some more.)
>
> 3. All devices are shipped with the same published, initial key.
>
> 4. If you lose the key, there's no way to ever update the device again
> - though it will continue to operate in its current configuration
> indefinitely.
> (Allowing a "reset to factory state" would allow an attacker to load
> arbitrary code, which we obviously want to prevent.
> Note that you can deliberately turn the device into a ROM by
> changing the key to a new randomly-chosen one and promptly
> discarding the new key.)

1) At the moment there is no signing key in the Crypto Bone at all. It generates the vpnkey and the masterkey on its own by reading from /dev/random. And it destroys the masterkey, as the masterkey must be provided by the user from outside the Crypto Bone, through the secure link.

2) Once the mechanical interlock is triggered, the user must provide the old masterkey to prove that he is a legitimate user. Note that the threat model assumes physical access to the CB, so an attacker can morse SOS on the GPIO pins, but without the masterkey a reset would not be initiated (separation of the masterkey from the message key database). But we're talking about Johnny, who can't encrypt, so the new key must be generated inside the CB as a result of running the reset code.

3) N/A

4) If Johnny loses the masterkey, no reset will be possible; the CB will be stuck after boot in the loop waiting for a masterkey that nobody can provide, and access to the message key database is lost. Time to insert a new mSD card image and be more careful next time.

So, what can go wrong?

If the reset code on the Crypto Bone is unmodified (like other crucial parts of OpenBSD), the legitimate user can trigger a reset by performing the action on the board. An attacker who does not know the masterkey cannot. And the masterkey is nowhere to be found on a powered-off Crypto Bone.

When keys are replaced and the database is re-encrypted, the transfer of these secrets to the legitimate user's removable medium must take place in the same manner as if a fresh mSD card were running for the first time. If Johnny forgets to remove the mSD card and does not transfer the secrets to his USB stick, then leaving the SD card in the Crypto Bone will destroy the masterkey, rendering the system unusable. And that is exactly what it should do.
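
Putting the pieces together, the reset step itself might look like the following sketch. This is not the Crypto Bone's actual code (which builds on cryptlib); Fernet and the database file name are stand-ins chosen to keep the example self-contained.

    # After the GPIO action, the old masterkey must successfully decrypt
    # the message key database (proving a legitimate user); then the
    # database is re-encrypted under a fresh masterkey.
    from cryptography.fernet import Fernet

    def reset_masterkey(old_key: bytes, db_path: str = "keydb.enc") -> bytes:
        with open(db_path, "rb") as f:
            plaintext = Fernet(old_key).decrypt(f.read())  # fails without old key
        new_key = Fernet.generate_key()                    # fresh masterkey
        with open(db_path, "wb") as f:
            f.write(Fernet(new_key).encrypt(plaintext))
        return new_key   # write to the removable medium, then forget it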


How to solve the chicken-and-egg problem

It is usually believed that the chicken-and-egg problem cannot be solved, but I need a solution for my particular version of it now.

The Crypto Bone has reached a state in which it can actually be used in practice, but it hasn't been subject to thorough code review yet. A code review has to focus on two programs (secrets and openpgp) that I have written in C, which make use of Peter Gutmann's cryptlib, and, in a wider context, on a handful of ksh scripts.

People I've asked for advice on how to get code review seem to say that unless the Crypto Bone is used widely, nobody will have any reason to look at the source code. In order to increase the user base considerably, I've developed a software-only version of the Crypto Bone for people who have no BeagleBone.

One might think that this all-in-one version will be more vulnerable to attack, as it lacks the isolated, minimalist environment that the real thing provides. And in a way that's true. But it turns out that it is still pretty safe unless an attacker has the ability to run arbitrary code as root on the computer that hosts the all-in-one version. And although this is more likely to happen on the user's machine than on the bone, it means the all-in-one Crypto Bone may be safe to use, too.

I'd like to know whether you think this is the right way to approach the usability dilemma, or whether you have an idea how to approach my chicken-and-egg problem from a different angle.


Usable Security Based On Sufficient Endpoint-Specific Unpredictability

I am about to secure secret information using a "password" that can be produced by a process on the user's endpoint device with administrative *execute privilege*, instead of using some information from the user's brain. The idea is that malware running on the user's behalf would not be able to produce this password unless it has execute permission as the root user.

Malware that has read access as the root user could read any file or piece of information on the endpoint device, but as producing the password requires execute permission, all secret information secured with this password remains safe until execute permission is gained.

Everything short of running code as root should not compromise the protected information. If such a secret-producing process existed, it would be a substitute for user-provided passwords and would increase the usability of crypto considerably.

I'd like to hammer out an idea for such a process.

It's clear to me that such a process wouldn't work on all devices, let alone on all operating systems. Unfortunately, it wouldn't work on the Crypto Bone, because people would restore their system from a common image file. Wherever systems are cloned, this is not an option; but on a user's individual OS installation it should work.

In all cases in which the OS provides a properly secured admin account, and in which the installation of the OS has produced a sufficient amount of unpredictability, such a process can collect pieces of information that are unavailable even to malware that has read access to the device as root.

I presume that root read access is a realistic threat on a user's endpoint device, because network attackers without physical access to the device can exploit the complexities of modern OSes (Heartbleed). And I presume that getting code to run as root is considerably more difficult than getting read access; in other words, once an attacker can run code as root, nothing can be done to save the information stored on the device at all.

So, where will we find the necessary unpredictability to construct a secret that is inaccessible to anyone unable to run code as root?

I exclude everything that is stored in a file and anything that is prone to change. But to the best of my knowledge, the inode numbers of specific files cannot be read directly by a process with read access only. These inode numbers are far from random, but they contain enough unpredictability to construct a password that is as secure as anything a user could provide by picking her brain in the traditional way.

My Fedora system informs me that the inode numbers of files like /root/.rnd, which live inside a directory with 700 permissions, differ quite a bit from the parent directory's inode number. One should not over-estimate the unpredictability stored in inode numbers, as they are allocated sequentially to new files, but a conservative guess is that there are at least 5 bits of unpredictability in each of them.

So imagine that in a certain OS there are, say, 5 such files (A-E). There are 5! = 120 permutations of their inode numbers in concatenation, each of which would make up a weak password if the sequence were known in advance. But if a sixth inode number, unknown to the attacker, defines the sequence that is actually used, the resulting string piped through a bcrypt hash function should be secure enough to replace a user-provided password.
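
A minimal sketch of this construction follows; the five file paths and the selector file are hypothetical placeholders, the third-party bcrypt module stands in for "a bcrypt hash function", and a fixed public salt keeps the derivation repeatable on the same machine (the secrecy lies in the inode numbers, not in the salt).

    # Sketch only; reading these inodes requires exactly the privilege
    # the scheme relies on (the directories are mode 700).
    import os
    from itertools import permutations

    import bcrypt                  # third-party module: pip install bcrypt

    FILES = ["/root/.rnd", "/root/fileA", "/root/fileB",
             "/root/fileC", "/root/fileD"]
    SELECTOR = "/root/.selector"   # sixth file; its inode picks the order

    inodes = [os.stat(p).st_ino for p in FILES]
    orders = list(permutations(inodes))             # all 5! = 120 orders
    chosen = orders[os.stat(SELECTOR).st_ino % len(orders)]
    material = "-".join(str(i) for i in chosen).encode()

    secret = bcrypt.kdf(password=material, salt=b"endpoint-demo-salt",
                        desired_key_bytes=32, rounds=100)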

In fact, some *NIXes provide very few such files, so it might be a better idea to generate a directory structure for this purpose: create a random number of files to eat up inode numbers, pick the files you need, and delete all the temporary files again. This way the unpredictability of the left-over files can be increased well above 5 bits.
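
A hypothetical sketch of that inode-consuming step; the upper bound of 1000 padding files and the directory layout are arbitrary choices for the example.

    # Create a random number of throwaway files so that the file we keep
    # lands on a hard-to-guess inode number, then delete the padding.
    import os
    import secrets
    import tempfile

    def make_unpredictable_file(directory: str, name: str) -> int:
        padding = []
        for _ in range(secrets.randbelow(1000)):   # eat a random number of inodes
            fd, path = tempfile.mkstemp(dir=directory)
            os.close(fd)
            padding.append(path)
        keeper = os.path.join(directory, name)
        open(keeper, "w").close()                  # this is the inode we keep
        for path in padding:
            os.remove(path)                        # free the padding inodes
        return os.stat(keeper).st_ino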

For the process that deterministically computes the local secret, the unpredictability of the result is clearly zero; but for any network attacker it should be sufficiently high to reliably protect secret information that is bound to the use of that specific device by an ordinary user.


The Moral Character of Cryptographic Work

Phillip Rogaway's essay introduced here by Perry Metzger aims at changing the mindset of current cryptography research (and practice).

I'd like to emphasize only three of the many excellent ideas from this essay:

First, the moral dimension of cryptography is not an accidental appendage, it is a fundamental part, because cryptography can be used to empower people or to take their freedom away. Phillip convincingly shows why mass-surveillance is dangerous for society and its negative social effects on people will lead to regression.

"[...] our inability to effectively address mass surveillance constitutes a failure of our field." (Abstract)

Secondly, to live up to the moral obligations of cryptography, we need a realistic threat model and we need to act upon this threat model efficiently, not abstractly.

"At this point, I think we would do well to put ourselves in the mindset of a *real* adversary, not a notional one: the well-funded intelligence agency, the profit-obsessed multinational, the drug cartel. You have an enormous budget. You control lots of infrastructure. You have teams of attorneys more than willing to interpret the law creatively. You have a huge portfolio of zero-days. You have a mountain of self-righteous conviction. Your aim is to *Collect it All, Exploit it All, Know it All*. What would frustrate you? What problems do you *not* want a bunch of super-smart academics to solve?" (p. 41)

Good question.

Is the answer really "How to repair the internet, how to fix protocol issues?" Is it "How to restore trust in (thus far untrustworthy) online services?"

I don't think so, because people's dependence on a technical infrastructure they neither understand nor control themselves is the building block of the insecurity we face today. We shouldn't underestimate the frustration potential of a development that would restore (or even begin to enable) users' control over their digital lives.

Phillip Rogaway calls this "A cryptographic commons".

"We need to erect a much expanded commons on the Internet. We need to realize popular services in a secure, distributed, and decentralized way, powered by free software and free/open hardware. We need to build systems beyond the reach of super-sized companies and spy agencies. Such services must be based on strong cryptography. Emphasizing that prerequisite, wee need to expand our *cryptographic commons*. (p. 41)

Popular services reflect real needs, something that has value for people who want to improve their lives in their communities; it is not shorthand for mindless entertainment. Phillip's own primary example is *secure messaging*, the need to be able to communicate without fear. Any solution would include a decentralized component, whose reliability and trustworthiness is of paramount importance to prevent it from becoming the next failure.

Wouldn't it be prudent to direct much more effort into developing, testing and promoting such a crypto server under the user's own control?

"But for cryptography, much is lost when we become so inward-looking that almost nobody is working on problems we *could* help with that address some basic human need. Crypto-for-crypto starves crypto-for-privacy, leaving a hole, both technical and ethical, in what we collectively do." (p. 24)