16: More on "Going Dark"

Announcements

Exam 2 in class in one week! Please be on time and seat yourself away from others. All topics since the last exam are fair game (along with whatever that pulls in from before).

Late days! In short, assignments have a due date. There is also a "late due date" by which you can still submit, but Hadi and I are going to provide less (or no) support after the official due date. More details on Piazza.

The rest of the semester

After the exam we have only a few topics left, and most of them will be more high-level than what we've been focusing on for the last several weeks.

The next (last!) few assignments will look like the following:

  • A version of istat for NTFS, though simpler than TSK's. You'll have two weeks for this one, and you'll need it.
  • A "mystery" assignment. I'll provide a disk image and scenario (like adams.dd) and ask you to produce a forensics report about it. You'll be graded not just on the evidence you find, but the explanation of the tools and techniques and your reasoning.
  • Another written assignment or two.
  • (possibly, if I can make it work) a lab-exercise-like assignment where you'll use Volatility to perform a memory forensics task, like recovering a password from a memory dump of a running copy of Windows.

From last class

The 4A requires that law enforcement (executive branch) go to a court (judicial branch, so powers are separated and presumably checked), swear out a warrant application (lying on which is perjury), show probable cause (not proof, but "where the facts and circumstances within the officers' knowledge, and of which they have reasonably trustworthy information, are sufficient in themselves to warrant a belief by a man of reasonable caution that a crime is being committed" (Brinegar v. US)), and particularly name the person/place to be searched and the things to be seized.

Mandated access

CALEA (Communications Assistance for Law Enforcement Act) requires telecom providers to be able to assist law enforcement. In other words, it's a law that mandates certain capabilities:

The Act obliges telecommunications companies to make it possible for law enforcement agencies to tap any phone conversations carried out over its networks, as well as making call detail records available. The act stipulates that it must not be possible for a person to detect that his or her conversation is being monitored by the respective government agency.

For communications that pass through centralized telecom companies, CALEA more or less does its job, at least at the network level. Switches that handle voice communications have intercept capability built-in. (And this shouldn't be surprising. Think of the history of telecoms: https://en.wikipedia.org/wiki/Switchboard_operator)

Routers carrying IP traffic must also comply (by ruling of the FCC), as must centralized VOIP providers. IP routers "delegate the CALEA function to elements dedicated to inspecting and intercepting traffic. In such cases, hardware taps or switch/router mirror-ports are employed to deliver copies of all of a network's data to dedicated IP probes."
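
Bellovin et al. don't describe probe internals, but to make the "copies of all of a network's data" idea concrete, here is a toy Python sketch (mine, not from any CALEA spec) of the receiving end of a mirror port: a raw socket bound to a hypothetical Linux interface named mirror0, to which the switch copies traffic, printing per-frame metadata.

    # Toy sketch of an "IP probe" reading from a switch mirror/SPAN port.
    # Linux-only (AF_PACKET) and requires root / CAP_NET_RAW. The
    # interface name "mirror0" is hypothetical.
    import socket
    import struct
    import time

    ETH_P_ALL = 0x0003  # capture every protocol, not just IP

    def run_probe(iface="mirror0"):
        s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_ALL))
        s.bind((iface, 0))
        while True:
            frame, _ = s.recvfrom(65535)
            # Ethernet header: destination MAC (6 bytes), source MAC (6), ethertype (2)
            dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
            print(f"{time.time():.6f} {src.hex(':')} -> {dst.hex(':')} "
                  f"type=0x{ethertype:04x} len={len(frame)}")

    if __name__ == "__main__":
        run_probe()

The point is only that once traffic is mirrored, collecting all of it is trivial; the hard parts for a real probe are scale, delivery to LE, and (per the quote above) remaining undetectable to the subscriber.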

The problem

Once the capability for monitoring exists, it can be used, and not always lawfully. For example, the Greek Watergate affair: https://en.wikipedia.org/wiki/Greek_wiretapping_case_2004%E2%80%9305

Bellovin et al. note that CALEA-like interfaces are problematic because they are designed specifically for surreptitious eavesdropping, unlike more typical network monitoring, which logs and alerts.

They argue that broadly requiring CALEA-like interfaces on not just network-level protocols but all application-layer protocols is a recipe for disaster. Let's go over their argument.

Because the Internet is (or can be) decentralized (any computer can act as a server), they argue that wiretapping capability would need to be widely distributed. In some sense it already is, at the IP layer, but their argument appears to be that any provider of a service, not just providers of Internet connectivity, would need to comply with CALEA. They assert this is "untenable." Specifically, they point out that Internet startups are diverse and dynamic, and that forcing them to integrate a complex wiretap protocol into "quickly deployed and poorly debugged services" would be an expensive burden on small companies.

P2P, they similarly argue, doesn't accommodate the CALEA model, as there is no centralized entity to regulate (though one wonders how true this is, given how effectively some p2p systems have been shut down in the past once the company that developed the main version of the software was shut down: see https://en.wikipedia.org/wiki/LimeWire for example).

Their best argument, IMHO, is that expanding the number of CALEA-like interfaces in the network would create great insecurity: the NSA found vulnerabilities in every CALEA-compliant switch it tested, which shows how hard it is to get interception technology right.

The proposed solution

Bellovin et al. argue that the FBI should instead leverage the "essentially unlimited number of security vulnerabilities" in modern computing and communication devices.

General criminal compromise of computers is wide-ranging and non-specific. Unpatched computers (more typically: services) might or might not be remotely exploitable, and criminals do not typically aim their attempts at compromise at particular victims (this depends on the criminal and the goal, of course).

LE tools must be targeted. They must be likely to succeed. They must not disrupt services (of the target or of others). And they must be manageable: it should be easy to check whether the tool worked, to control it during monitoring, and to clean up when done.

Four primary components (a toy sketch of this lifecycle follows the list):

  • selection or discovery of vulnerability
  • installation
  • obtain access after installation
  • obtain communications
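
Nothing below is from Bellovin et al.; it is just a toy Python sketch of how those four phases, plus the manageability requirements above (check whether the tool worked, control it during monitoring, clean up when done), might be tracked as an auditable lifecycle tied to a specific warrant. The phase names and fields are hypothetical.

    # Toy model of the lifecycle above, for discussion only.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone
    from enum import Enum, auto

    class Phase(Enum):
        DISCOVER_VULNERABILITY = auto()
        INSTALL = auto()
        OBTAIN_ACCESS = auto()
        OBTAIN_COMMUNICATIONS = auto()
        CLEANUP = auto()  # "be able to clean up when done"

    @dataclass
    class AuditEntry:
        phase: Phase
        succeeded: bool  # "easy to check whether the tool worked"
        note: str
        when: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    @dataclass
    class Operation:
        warrant_id: str  # every action is tied to a specific warrant
        log: list = field(default_factory=list)

        def record(self, phase, succeeded, note):
            self.log.append(AuditEntry(phase, succeeded, note))

        def current_phase(self):
            # "control it during monitoring": the operator can always see
            # how far the tool has gotten and decide whether to abort.
            return self.log[-1].phase if self.log else None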

And how reliable is the data that's gathered? Judges must believe that the tool gathers only and exactly what is cited in the warrant. Tools that undercollect might miss exculpatory evidence, and overcollection will violate the warrant's limits. (This differs from minimization, where LE must take reasonable steps to ensure they collect only the communications of the subject, and only when the subject is committing a crime.)
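
As a toy illustration (mine, not the paper's), here is what "gathers only and exactly what is cited in the warrant" might reduce to if a warrant's scope were simply a named target plus a time window; real warrants are more nuanced, and minimization remains a human judgment on top of any such filter.

    # Hypothetical warrant-scope check; the field names and the reduction of
    # "scope" to target + time window are my simplifications.
    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class Message:
        sender: str
        recipient: str
        sent_at: datetime
        body: str

    @dataclass
    class WarrantScope:
        target: str            # the particularly named person/account
        not_before: datetime   # authorized collection window
        not_after: datetime

    def within_scope(msg, scope):
        # Dropping in-scope traffic = undercollection (may lose exculpatory
        # evidence); keeping out-of-scope traffic = overcollection (exceeds
        # the warrant). Minimization (only listening while the subject is
        # actually discussing the crime) is a judgment this check cannot make.
        involves_target = scope.target in (msg.sender, msg.recipient)
        in_window = scope.not_before <= msg.sent_at <= scope.not_after
        return involves_target and in_window

A tool that applies such a check too aggressively undercollects; one that skips it overcollects. Neither failure is visible unless the tool's behavior can be audited, which loops back to the reliability question.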

We've seen this already with NITs, where the FBI discovered and used an exploit that they served to the Tor Browser to recover information from users. More generally, any computer connected to the Internet that might be remotely exploitable could be monitored in this way, though discovering 0-days might be hard.

Finding vulnerabilities

One could imagine an LE lab that was federally funded to find LE-grade exploits in apps and platforms (e.g., Windows or iPhone). Or LE could purchase exploits on the vulnerability market (though clearly there are ethical issues here -- are they creating or encouraging an illegal market?).

Vulnerabilities remain useful until they are disclosed, patched, and the patch is deployed widely. (How long is that time period? Who knows?) There is also the issue, as we've seen with NITs, that the defense might want the exploit for whatever reason, and by giving it to them, LE will lose access to a tool.

Relatedly, should the government disclose vulnerabilities after a set period of time (maybe even zero time)? The ethical calculus is not clear here: retaining a tool for lawful wiretapping vs. pre-empting crimes that third parties might commit using the vulnerability.

Finally, this lab would be a nightmare to secure: what a rich target! Not unlike NSA/CIA/etc., it would almost certainly need to be air-gapped and otherwise highly secured -- which would only serve to slow its progress and make it more expensive to operate.