Thursday 29 August 2013

NIST Cybersecurity Framework

The purpose of this document is to define the overall Framework and provide guidance on its usage. The primary audiences for the document and intended users of the Framework are critical infrastructure owners and operators and their partners. However, it is expected that many organizations facing cybersecurity challenges may benefit from adopting the Framework. The Framework is being designed to be relevant for organizations of nearly every size and composition. It is also expected that many organizations that already are productively and successfully using appropriate cybersecurity standards, guidelines, and practices – including those who contributed suggestions for inclusion in this document – will continue to benefit by using those tools.

DRAFT - Framework Core
The Framework Core offers a way to take a high-level, overarching view of an organization's management of cybersecurity risk by focusing on the key functions of an organization's approach to cybersecurity. These are then broken down further into categories. The Framework's core structure consists of:
  • Five major cybersecurity functions and their categories and subcategories
  • Three Framework Implementation Levels associated with an organization's cybersecurity functions, indicating how well the organization implements the Framework.

DRAFT - Compendium

The Framework's core also includes the compendium of informative references: existing standards, guidelines, and practices to assist with specific implementation.

The compendium of informative references, which includes standards, guidelines, and best practices, is provided as an initial data set mapping specifics to subcategories, categories, and functions. The Framework's compendium points to many standards, including performance-based and process-based standards. These are intended to be illustrative and to assist organizations in identifying and selecting standards for their own use and for mapping into the core Framework. The compendium also offers practices and guidelines, including practical implementation guides.

Monday 15 July 2013

Biometric Specifications for Personal Identity Verification - NIST 800-76-2

This document contains technical specifications for biometric data mandated or allowed in [FIPS]. These specifications reflect the design goals of interoperability, performance and security of the PIV Card and PIV processes. 

This specification addresses iris, face, and fingerprint image acquisition to variously support background checks, fingerprint template creation, retention, and authentication. These goals are addressed by normatively citing and mandating conformance to biometric standards and by enumerating requirements where the standards include options and branches. In such cases, a biometric profile can be used to declare what content is required and what is optional. This document goes further by constraining implementers' interpretation of the standards. Such restrictions are designed to ease implementation, assure conformity, facilitate interoperability, and ensure performance, in a manner tailored for PIV applications.

The biometric data specifications herein are mandatory for biometric data carried in the PIV Data Model (Appendix A 
of [800-73, Part 1]). Biometric data used outside the PIV Data Model is not within the scope of this standard. 

This document does, however, specify that most biometric data in the PIV Data Model shall be embedded in the Common Biometric Exchange Formats Framework [CBEFF] structure of Section 9. 

This supports record integrity (using digital signatures) and multimodal encapsulation. 

This document provides an overview of the strategy that can be used for testing conformance to the standard. It is not meant to be a comprehensive set of test requirements that can be used for certification or demonstration of compliance to the specifications in this document. NIST Special Publications 800-85A and 800-85B [800-85] implement those objectives.

http://csrc.nist.gov/publications/nistpubs/800-76-2/sp800_76_2.pdf

Tuesday 14 May 2013

Homomorphic Encryption - Cloud Computing compatible with Privacy


Suppose that you want to delegate the ability to process your data, without giving away access to it. Craig Gentry at the IBM T.J. Watson Research Center shows that this separation is possible.

He describes a fully homomorphic encryption scheme that keeps data private but allows a worker who does not have the secret decryption key to compute any (still encrypted) function of the data, even when that function is very complex. 

In short, a third party can perform complicated processing of data without
being able to see it. Among other things, this helps make cloud computing compatible with privacy.

http://crypto.stanford.edu/craig/easy-fhe.pdf
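Gentry's construction is far beyond a blog snippet, but the core idea of computing on data without decrypting it can be sketched with a much simpler, additively homomorphic scheme (a toy Paillier system, not FHE). The tiny parameters below are purely illustrative and completely insecure:

```python
import math
import random

# Toy Paillier cryptosystem: additively homomorphic only, NOT Gentry's
# fully homomorphic scheme. Tiny, insecure parameters for illustration.
p, q = 17, 19
n = p * q
n2 = n * n
g = n + 1                       # standard simple choice of generator
lam = math.lcm(p - 1, q - 1)    # Carmichael's lambda(n)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # decryption constant

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# The "worker" multiplies ciphertexts without ever holding the secret key;
# the product decrypts to the sum of the plaintexts.
c1, c2 = encrypt(12), encrypt(30)
assert decrypt((c1 * c2) % n2) == 42
```

Multiplying ciphertexts adds plaintexts; a fully homomorphic scheme extends this so that both addition and multiplication, and hence arbitrary circuits, can be evaluated on encrypted data.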

Saturday 13 April 2013

Certificate Transparency - Step forward to protect both domain owners and end-users


Introduction

The goal is to make it impossible (or at least very difficult) for a Certificate Authority to issue a certificate for a domain without it being visible to the owner of that domain. A secondary goal is to protect users as much as possible from mis-issued certificates.

It is also intended that the solution should be backwards compatible with existing browsers and other clients.

This is achieved by creating a number of cryptographically assured, publicly auditable, append-only logs of certificates. Every certificate will be accompanied by a signature from one or more logs asserting that the certificate has been included in those logs. Browsers, auditors and monitors
will collaborate to ensure that the log is honest. Domain owners and other interested parties will monitor the log for mis-issued certificates.

The logs will not deal with revocation: that will be accomplished by existing mechanisms.

The Log

Each log is an append-only log of all certificates submitted to it. It is designed so that monitors can efficiently ensure that any certificate logged is promptly visible to them and can be checked for legitimacy (for example, by knowing which certificates the domain owner has got from CAs). It is also possible for auditors to efficiently check that whatever partial information they have about the log is consistent with the append-only nature of the log.

In other words, monitors see the whole of the log, and watch over it on behalf of domain owners and other interested parties. Auditors gather partial information and then verify that all that partial information is consistent with the current state of the log. Inconsistencies indicate dishonesty on the part of the log. For example, an auditor built into a browser would verify that the certificate for each website the browser visited actually appears in the log.

In general, “consistent with the current state of the log” means that the current log provably contains every certificate ever signed by the log.

If the log ever attempts to claim that it has logged a certificate which is not actually in the log, then this will become apparent to auditors and monitors.

Thus, domain owners are assured that only their own legitimate certificates are in circulation, and can take action when certificates are mis-issued. This ultimately protects users as well as domain owners by effectively preventing masquerading as websites that are monitoring the logs.

Detailed Operation


Anyone can submit a certificate and its validation chain to the log. The log will immediately return a signed data structure known as a Signed Certificate Timestamp (SCT) containing

. The time the certificate was submitted.
. A signature over the certificate and timestamp.

The SCT is served along with the certificate each time a TLS session is initiated, either through a TLS extension or through incorporation into the certificate itself. Clients will decline to connect to servers that do not include an SCT from a trusted log. Clients will also later check that the certificate has been correctly incorporated into the log (see below).

The log promises to incorporate the certificate and chain within a certain amount of time. Failure to do so is considered a breach of contract by the log. This time is known as the Maximum Merge Delay (MMD). We anticipate the MMD being measured in hours. Clearly, the MMD is the longest possible time a rogue certificate can be used without detection.

The log itself consists of an ever-growing Merkle tree of all certificates ever issued. As we show in the detailed protocol document [ref], it is possible for anyone with a complete copy of the tree to efficiently show that any two versions of the tree are consistent: that is, the later version includes everything in the earlier version, in the same order, and all new entries come after all entries from the earlier version. The size of this proof is logarithmically proportional to the number of entries.

As frequently as possible, but at least as often as the MMD, the log will produce a new version of the tree and will sign the following data, known as a Signed Tree Head (STH):

. The root hash of the Merkle tree.
. The number of entries in the tree.
. A timestamp.
In the unlikely event there are no new entries by the time the MMD has expired, the log will reissue the STH with a new timestamp.

Monitors will be able to fetch this new version and a copy of all new certificates in the tree.

Since they will have the previous version, they can check for themselves that the two trees are consistent, simply by constructing the consistency proof themselves. Any discrepancy will be a breach of contract by the log. Furthermore, this discrepancy will be provable - the monitor will be able to show the two signed versions of the tree, and that they are not consistent. Because they are both signed by the log, the inconsistent version can only have been produced by the log misbehaving.
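As a sketch, a monitor holding a full copy of the log can perform this check by recomputing the Merkle Tree Hash over the prefix covered by the earlier signed tree head. The hashing below follows the RFC 6962 conventions; the entry names are made up:

```python
import hashlib

# Sketch of a monitor's consistency check over a full copy of the log.
def leaf_hash(data):
    return hashlib.sha256(b"\x00" + data).digest()

def mth(entries):
    # Merkle Tree Hash: split at the largest power of two below len(entries).
    n = len(entries)
    if n == 1:
        return leaf_hash(entries[0])
    k = 1
    while k * 2 < n:
        k *= 2
    left, right = mth(entries[:k]), mth(entries[k:])
    return hashlib.sha256(b"\x01" + left + right).digest()

old_log = [b"cert-%d" % i for i in range(5)]
new_log = old_log + [b"cert-5", b"cert-6"]     # the log appended two entries

old_root = mth(old_log)                        # root from the earlier STH
new_root = mth(new_log)                        # root from the current STH

# Consistent: the new log reproduces the old root over its first 5 entries.
assert mth(new_log[:len(old_log)]) == old_root

# A log that rewrote history fails the same check.
tampered = [b"evil"] + new_log[1:]
assert mth(tampered[:len(old_log)]) != old_root
```

A real monitor would also verify the log's signatures on both tree heads before comparing roots, so that any inconsistency is provable to third parties.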

It is also possible to show the log has failed to honour the MMD by showing an SCT and an STH whose timestamps differ by more than the MMD, and that the corresponding certificate is not present in the tree. The log must always produce an STH that is more recent than the MMD on request; failure to do so is an indication of misbehaviour.

Auditors will also be able to request from either the log or a monitor (or anyone with a copy of the log) a proof that any particular SCT is consistent with any particular STH, so long as the STH was issued after the MMD had passed since the SCT's timestamp. This proof consists of a Merkle path from the leaf hash corresponding to the SCT up to the root hash in the STH.
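A minimal sketch of generating and verifying such a Merkle path, assuming a power-of-two tree size so that sibling pairing is exact (a real client would additionally verify the log's signature on the STH):

```python
import hashlib

# Minimal audit-path sketch (RFC 6962-style hashing). Assumes a
# power-of-two number of leaves so every node has an exact sibling.
def leaf_hash(data):
    return hashlib.sha256(b"\x00" + data).digest()

def node_hash(left, right):
    return hashlib.sha256(b"\x01" + left + right).digest()

def build_levels(entries):
    levels = [[leaf_hash(e) for e in entries]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([node_hash(prev[i], prev[i + 1])
                       for i in range(0, len(prev), 2)])
    return levels

def audit_path(levels, index):
    # Sibling hash at each level, from the leaf up to (not including) the root.
    path = []
    for level in levels[:-1]:
        path.append(level[index ^ 1])
        index //= 2
    return path

def verify(entry, index, path, root):
    h = leaf_hash(entry)
    for sibling in path:
        h = node_hash(sibling, h) if index % 2 else node_hash(h, sibling)
        index //= 2
    return h == root

entries = [b"cert-%d" % i for i in range(8)]   # 8 = 2**3 leaves
levels = build_levels(entries)
root = levels[-1][0]                           # root hash, as in the STH

path = audit_path(levels, 5)
assert len(path) == 3                          # log2(8) sibling hashes
assert verify(b"cert-5", 5, path, root)        # the leaf really is in the tree
assert not verify(b"cert-X", 5, path, root)    # any other leaf fails
```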

However, this will, of course, reveal the particular certificate that was queried and so we must also provide a privacy preserving mechanism for verifying SCTs. We provide two mechanisms.

The first makes the various proofs available via DNS. To get a sense of how this works, say you wanted to see the proof that a certificate with SCT hash 89abcdef is in the current tree (which contains, say, 1300000 entries). To find the location of the SCT in the tree, you would request a
TXT record containing its index from

. 89abcdef.hash.<domain>
Say the returned index is 1234567. The Merkle path you want then contains the hash of certificate 1234568, the hash of 1234565 + 1234566, and so on. To get these values, you would request a TXT record containing the hash from

. 1234567.0.1300000.tree.<domain>
. 617283.1.1300000.tree.<domain> (617283 = 1234566 / 2).
and so on. In general, you request <entry number>.<level>.<tree size>.tree.<domain> to get the hash at that position in the particular tree you are interested in.
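A small sketch of constructing those query names; "log.example" stands in for the log's <domain>, and the index and tree size are the ones from the example above:

```python
import math

# Sketch of building the TXT query names described above. "log.example"
# is a placeholder for the log's <domain>.
def proof_query_names(index, tree_size, domain):
    # One name per tree level, halving the entry number toward the root.
    levels = math.ceil(math.log2(tree_size))
    names = []
    for level in range(levels):
        names.append(f"{index}.{level}.{tree_size}.tree.{domain}")
        index //= 2
    return names

# Step 1: look up the SCT's index in the tree.
print("89abcdef.hash.log.example")
# Step 2: fetch the hashes along the Merkle path.
names = proof_query_names(1234567, 1300000, "log.example")
print(names[0])   # 1234567.0.1300000.tree.log.example
print(names[1])   # 617283.1.1300000.tree.log.example
```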

This method does not hide which certificate is being checked, but it does hide who is checking it from the log: most clients are configured to use their ISP’s nameservers (or some other caching resolver), so all the log will see is that some client of a particular ISP is interested in a particular
certificate. The ISP will, of course, know which client, but they also already have access to the IP addresses that client visited.

The second method is for the auditor to request a range of certificates around the one of interest and check all of them, including the one they want to check. In order to do this, the log must also make available a range of timestamps for each chunk of log. That is, for every, say, 256
certificates the log will say what the lowest and highest timestamp in the chunk is. Note that there may be overlap between chunks. The client can thus choose a set of chunks that should include the certificate it has, since it knows the timestamp from the SCT, and fetch all the corresponding
hashes and their proofs. All the log learns is that the client wants to verify one of the certificates it fetched.

Finally, since the proofs can be generated by anyone with a copy of the log, clients can also choose to either keep their own copy, or verify via some trusted third party that keeps a copy (note that because of the signatures on the STH and SCT this third party only need be trusted to preserve privacy, not to be honest about the log contents - the client still verifies everything itself, but because it trusts the third party to preserve its privacy it no longer needs to request “dummy” entries to hide the sites it has visited).


Gossip


All the above allows any particular client to check that their view of the world is consistent with their past views, but the final piece in the puzzle is to show that everyone’s views are consistent with each other.

This can be achieved by exchanging STHs and SCTs between different clients through gossip protocols. These can then be checked for consistency using the methods described above. Clients wishing to preserve privacy can verify their own SCTs against STHs fetched from the log or its mirrors and only gossip the latest STH they have seen.

Security Considerations

A misissued certificate can be used without detection for at most the MMD. Once the MMD has passed since the SCT was issued, either the certificate appears in a public log, or the log issuing the SCT is no longer trusted, since it has failed in its duty to include the certificate in the log within
the MMD. In the first case, the misissue can be detected and the certificate revoked. In the second, the log signing key is revoked. In both cases, the certificate/SCT pair will no longer be accepted by clients.

The log contains the certificate plus the intermediates chaining it to a trusted root, but the client only verifies that the end certificate appears in the log. The client only needs to validate the end certificate because that is sufficient to check for revocation, which is all the client needs to know.
The log, however, needs to record the entire chain so that when a certificate is misissued it is possible to correctly assign blame. The reason the log does not sign the chain is that many CAs issue certificates that may be presented with multiple chains. Permitting validation of all possible
chains would bloat the log and complicate protocols for no particular gain.

Sizes

(Assuming SHA-256 as the hash function.)

The number of currently valid public certificates is estimated to be around 1.5M in 2012.

Data transmitted as part of a TLS handshake: 8 bytes timestamp + signature (<100 bytes for ECDSA, 256+ bytes for RSA)

Signed tree head (STH): 8 bytes timestamp + 32 bytes root hash + 8 bytes tree size + signature

Merkle proof: log2(size of tree) x 32 bytes + STH. For a tree with 1M certificates: 640 bytes + 8 bytes timestamp + signature.

Caching: assume for example a client that caches an intermediate node hash of every subtree of 256 certificates. Cache size for a tree with 1M certificates: 32 MB/256 + current signature ~ 128 kB. Merkle proof size: 8 x 32 = 256 bytes. A client will only need to request the rest of the proof +
signature if there is a mismatch at the cached node. It is also possible for many clients to cache the hash of every certificate it ever verifies so each certificate only needs to be validated once. For example, 1,000 certificates/day would be 12 MB/year.

Size of entire tree (end certificates only): assume certificates are, on average, 1 kB long, then the tree for 1.5M certificates is 1.5 GB. There is sufficient information in this data to reconstruct the STH, which would mean performing around 3M hashes. A MacBook Pro can do about 1,000,000
SHA-256 hashes a second, so this would take 3 seconds. Merkle trees are easily parallelised, so time can be reduced almost arbitrarily.
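As a quick sanity check, the figures above follow directly from the tree geometry (a sketch; exact byte counts depend on encoding details not fixed here):

```python
import math

# Checking the back-of-the-envelope figures above (SHA-256 = 32-byte hashes).
HASH = 32

# Merkle proof for a 1M-entry tree: one sibling hash per level.
proof_hashes = math.ceil(math.log2(1_000_000))
assert proof_hashes == 20
assert proof_hashes * HASH == 640               # 640 bytes, as stated

# Cache one hash per 256-certificate subtree (32 MB of leaf hashes / 256).
assert 1_000_000 * HASH // 256 == 125_000       # ~128 kB, as quoted

# Below a cached node only log2(256) = 8 levels remain to prove.
assert 8 * HASH == 256                          # 256-byte partial proof

# Hashing 1,000 verified certificates per day for a year.
assert 1_000 * 365 * HASH == 11_680_000         # ~12 MB/year

# Rebuilding the STH for 1.5M leaves takes roughly 2N hashes.
assert 2 * 1_500_000 / 1_000_000 == 3.0         # ~3 s at 1M SHA-256/s
```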

Certificate Transparency v2.1a

Ben Laurie (benl@google.com)
Emilia Kasper (ekasper@google.com) 

Tuesday 19 March 2013

A fingerprint reader and NFC e-wallet: a powerful combination of payment and two-factor authentication.


Apple is said to be planning to introduce an iPhone that can be unlocked by the owner's fingerprint. At the same time other manufacturers are thought to be experimenting with iris scanning and voice recognition, the Telegraph reports. 

According to the paper, speculation about Apple's plans for fingerprint recognition began last year when the iPhone maker bought biometric security firm AuthenTec for 235 million pounds. 

Earlier this month, KGI Securities analyst Ming-Chi Kuo said his firm expected the results of that takeover to be revealed this year with the new iPhone 5S. 

Samsung has had a "Face Unlock" feature in its phones since last year's Galaxy S III. However, the phone's camera would often unlock if it recognised a photograph of the owner. The manufacturer said the feature has been improved in the Galaxy S4, released in New York last week.





Wednesday 6 March 2013

10 classic mistakes that plague software development projects

Project management is never an exact science, but when you combine it with the vagaries of software development, you have a recipe for disaster. I have seen a fair number of common mistakes that project managers make when working with software development projects. Some of these mistakes are not exclusive to software development, but they are especially prevalent and damaging in that context.


1: The “pregnant woman” mistake

Fred Brooks illustrated a common project management mistake with his famous statement that just because one woman can have a baby in nine months does not mean that nine women can have a baby in one month. And we still see this come up time and time again — the idea that throwing more people at a problem can make it be fixed quicker. Sadly, this is just not true.
Every person you add to a project adds friction to the project as well –  things like the time needed to bring them up to speed or coordinate their work with other people. In fact, my experience has been that there is a tipping point where adding people actually slows the work down more than it speeds things up, especially for the first few months. And there are many tasks that just can’t be split up to be done by many people or teams at once. They simply have to be done “one foot in front of the other.”

2: The wrong metrics

Managers need metrics for a variety of reasons: measuring “success” or status, performance reviews and analysis, and so on. The mistake I see too often is that the easier a metric is to collect, the more likely it is that it isn’t measuring anything useful. Of course, the easiest metrics to collect or understand are also the most likely to be used. Let’s take “bug tickets” as an example.
It is easy to count how many tickets get entered. But that is not a good measure of quality, because how many of those tickets are user error or truly “features”? So managers often look to the next level of metric: ticket resolution rate (tickets closed per day or week or iteration or whatever). If you have ever dealt with a help desk that constantly closes tickets for things that aren’t actually fixed, causing a proliferation of tickets, you know what it’s like dealing with an organization driven by this metric!
Instead of actually getting work done or helping the user (for example, leaving tickets open until the user accepts the resolution), the organization exists solely to open as many tickets as possible and then close them as quickly as possible, so it can get its resolution rate up. A better number would be the hardest to measure: ratio of true “bug tickets” created in relationship to features deployed, changes made, or something similar. Needless to say, that is not an easy number to understand or to collect and report on. The result is that organizations choose to make decisions based on the wrong metrics rather than the right ones, due to convenience.

3: Estimating times too far out

A common problem I see with certain project management methodologies is that they like to play “just so stories” with timelines and time estimates. Project managers who honestly think they know what pieces of functionality any given developer will be working on more than a month or two out (unless it is a very large, broad piece of functionality) are likely to be disappointed and mistaken. Software development is just too unpredictable. Even if you can prevent or account for all the usual things that alter timelines and priorities, there is still little guarantee that things will take the time you think they will.

4: Estimating times too broadly

Another typical issue with time estimates involves not breaking tasks down into small enough pieces. If I’m told that a piece of work will take one week, I’ll ask where exactly that number is coming from. Unless someone has analyzed all the minor pieces of work in the whole, a “one-week” time estimate is nothing but pure conjecture and should be disregarded.

5: Failing to account for tasks

How many times have you seen a deadline blown because it was established without accounting for a critical task like testing? That is another reason why you cannot and should not ever accept a task on a timeline that is not broken down into its component tasks. There is a chance that the estimate omits something important.

6: Poor communications

It is important to keep everyone in the loop on project status, but it is easy to forget to do it. This is where a lot of the mistrust between IT and the business team comes from: The business does not feel like it has a good handle on what’s happening with its projects. And the more it feels left in the dark, the more likely it is to start trying to micromanage or force things to happen the way it feels it should be done. You can mitigate this problem by letting people know where things stand, both on a regular basis and when milestones are accomplished or the status changes.

7: Disconnected business priorities

There is often a wide gap between the priorities of projects within the development organization, the priority of the project in the view of the overall business, and the priority of the project in the eyes of the requester. A common issue is that a “high priority” project for one department is not viewed as important by the business because it does not generate revenues, and so the developers also downgrade it. Everyone needs to be on the same page with priorities, and a large part of that is to ensure that business units are not evaluated on the basis of projects that the overall business considers lower priority.

8: Constructing a wall of process

When the development team feels overwhelmed, one of the natural reactions is to establish a lot of process to slow things down. I have worked at places where even the simplest of changes required a change request form to be filled out (on paper, of course), in triplicate, physically disseminated, agreed upon, cross-signed by managers, and after all of that, there was still a 45-day minimum time until the work would be done! Needless to say, this organization was not seen as a “partner” or an “important lever for getting work done” by the business; it was seen as a cost center and treated as such. The wall of process is typically a stopgap measure for deeper issues in the process or company culture, and while it is easier to put up the wall than to deal with those issues (and in some companies, the issues are irreconcilable), the wall of process is counterproductive and leads to a hostile environment.

9: The “hit-the-ground-running” myth

When adding people to a project, it is tempting to assume that they can hit the ground running. No one hits the ground running in the world of software development, and those who say they do are mistaken. Every project has an acclimation period, and the farther along the project is, the longer that acclimation period is — there is more and more code to understand and get a handle on. Failing to take this into account will get you into hot water. It may take only a few days or weeks for a developer to come into the project at the beginning, but it could take months for a developer to be fully productive when added to the project long after it has started.

10: Multi-tasking

This is another “skill” (like “hitting the ground running”) that people think they have, but they really do not. The more you ask people to multi-task, the worse their work will be and the longer it will take. This applies to multi-tasking at the minute-to-minute level (juggling emails, phone calls, actual work, etc.) as well as the hour-to-hour or day-to-day level (handling multiple projects). The more you demand from people, the more the wheels fall off. To make it even worse, multi-tasking not only is likely to mangle the work, but it grinds people up and sends them looking for another job eventually… forcing you to bring in new people in the middle of a project and causing even more issues.
April 20, 2012, 4:01 PM PDT

Wednesday 27 February 2013

Mozilla Tightens Requirements for Digital Certificates (February 19, 2013)

Mozilla has updated its Certificate Authority (CA) Certificate Policy to lessen the risk of hackers getting their hands on subordinate CA certificates. 

Subordinate CA certificates are granted the same power as the CA, and they can be used to issue valid SSL certificates. 

Until now, subordinate CA certificates have not been subjected to the same scrutiny and controls as root CA certificates. 

The policy is being changed to reflect Mozilla's "belief that each root is ultimately accountable for every certificate it signs, directly or through its subordinates." 

Subordinate CA certificates issued after May 15, 2013 must comply with Mozilla's new policy; existing certificates have until May 15, 2014 to be updated to comply with the policy.

Wednesday 20 February 2013

DNSSEC Adoption Growing in Government, But Unpopular with eCommerce and Finance

Although DNSSEC (DNS Security Extensions) technology helps prevent spoofing of websites, none of the top e-commerce companies or banking and financial services companies have deployed it fully. 

In contrast, two-thirds of US government agencies are using DNSSEC, although some of the agencies are signing their domains incorrectly. 

http://www.theregister.co.uk/2013/02/18/dnssec/


I imagine that in the US in 1963 there were similar stories about the unpopularity of a change required to make delivery of physical messages more reliable. It was called the Zone Improvement Plan - and these days we routinely put ZIP codes on snail mail addresses without grumbling. Need to get over that hump with DNSSEC - and then use the freed-up energy to push BGP and SSL Certificate Authority security improvements up the next hill. 


Having implemented DNSSEC for a few domains, I found out first hand that it is very easy to "mess up" and render a domain non-resolvable. Even some notable .gov sites (such as fbi.gov) have fallen victim to badly configured DNSSEC in the past. On the other hand, attacks that involve DNS spoofing are rare and not considered a sufficient risk compared to the risk of downtime due to badly configured DNSSEC signatures. 

This may, however, change as more commercial DNS providers offer DNSSEC as a service and as popular DNS servers like BIND make configuring DNSSEC easier. 

Tuesday 5 February 2013

Lucky Thirteen: Breaking the TLS and DTLS Record Protocols


Nadhem AlFardan and Kenny Paterson of the Information Security Group at Royal Holloway, University of London, announced a new TLS/DTLS attack called Lucky Thirteen. The attack allows a man-in-the-middle attacker to recover plaintext from a TLS/DTLS connection when CBC-mode (cipher-block chaining) encryption is used.


http://www.isg.rhul.ac.uk/tls/TLStiming.pdf

Friday 25 January 2013

Elliptic Curve Certificates and Signatures for NFC-enabled mobile phones

The Near Field Communication (NFC) Forum finalized its Signature Record Type Definition (RTD) to protect against manipulation of NFC Data Exchange Format (NDEF) data. The choice of digital certificate and signature type has a major impact on tag memory usage, cost, and device performance. The Smart Poster RTD gives example NDEF message sizes ranging from 23 to 69 bytes. With digital signatures and certificates, this can balloon to over 1000 bytes, depending on the type of signature and certificate(s), forcing the use of larger and more expensive tags. 

The paper proposes further use of elliptic curve cryptography, specifically ECQV certificates and ECPVS signatures in addition to the ECDSA signature scheme. These technologies were designed with efficiency as a primary goal and are well adapted to the constraints of NFC tags. For the same level of security, ECQV+ECPVS provides a tenfold reduction in storage overhead compared to RSA signatures and certificates (from about 1000 to 100 bytes). 

Both ECQV and ECPVS are standards based, compatible with the NFC Forum Signature RTD and the ITU X.509 standard for Public Key Infrastructure (PKI). ECPVS can provide an additional confidentiality feature that allows portions of the data to be encrypted under a separate key. We introduce the reader to an NFC PKI architecture, scenarios for tag issuers, memory utilization and performance data for the various schemes specified in the Signature RTD.


Digital signatures are necessary for providing trust in the NFC ecosystem, where users are expected to make wireless connections to unknown readers, tags, and peers. They provide the user with a level of comfort that the data they receive has been signed by a trusted third party and, more importantly, prevent a bad user experience with a malicious tag. 

They can also accommodate almost any application scenario including coupons and tickets.

The Signature RTD gives implementers choices for digital signature and certificate types. With the modern processors found on smartphones, the choice of signature type does not impact performance, as signing and verifying take less than 10 ms. However, the choice does have a major impact on memory utilization. ECDSA uses approximately 50% less memory than RSA and can fit on most tag types. If we utilize ECDSA with ECQV certificates, we use 90% less memory than RSA.

Signatures with message recovery, such as ECPVS and keyed ECPVS, can be used for message confidentiality where needed. If there is enough demand for confidentiality, the NFC Forum can easily add this signature type to the Signature RTD given its extensible design.