15: "Going Dark"

Announcements

Midterm Exam 2 is in two weeks.

The rest of the semester

After the exam we have only a few topics left, and most of them will be more high-level than what we’ve been focusing on for the last several weeks.

The next (last!) few assignments will look like the following:

  • A version of istat for NTFS, though simpler than TSK’s. You’ll have about two weeks for this one, and you’ll need it.
  • A “mystery” assignment. I’ll provide a disk image and scenario (like adams.dd) and ask you to produce a forensics report about it. You’ll be graded not just on the evidence you find, but the explanation of the tools and techniques and your reasoning.
  • Another written assignment or two.
  • (possibly, if I can make it work) a lab-exercise-like assignment where you might use Volatility to perform a memory forensics task, like recovering a password from a memory dump of a running copy of Windows, or perhaps an image-classification task based on neural nets, or something else “interesting.”
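For the NTFS istat assignment above, the core task is decoding MFT entries. As a minimal sketch (much simpler than TSK’s implementation), assuming the standard NTFS on-disk field offsets, here is how the fixed header of a single MFT record might be parsed; update-sequence fixups and attribute walking are omitted:

```python
# Hedged sketch: parse the fixed header of an NTFS MFT ("FILE") record,
# the first step an istat-like tool performs. Field offsets follow the
# standard NTFS on-disk layout; fixup application and attribute parsing
# are left out for brevity.
import struct

def parse_mft_header(record: bytes) -> dict:
    """Return istat-style metadata from the start of a raw MFT record."""
    if record[0:4] != b"FILE":
        raise ValueError("not a valid MFT record (bad signature)")
    # Bytes 16-23: sequence number, hard-link count, first-attribute
    # offset, and flags, all little-endian 16-bit values.
    seq_num, link_count, first_attr_off, flags = struct.unpack_from("<4H", record, 16)
    # Bytes 24-31: used and allocated sizes of the record.
    used_size, alloc_size = struct.unpack_from("<2I", record, 24)
    return {
        "sequence": seq_num,
        "links": link_count,
        "first_attr_offset": first_attr_off,
        "allocated": bool(flags & 0x01),   # 0x01 = record in use
        "directory": bool(flags & 0x02),   # 0x02 = record is a directory
        "used_size": used_size,
        "alloc_size": alloc_size,
    }
```

A real tool would next walk the attributes starting at `first_attr_offset` to recover $STANDARD_INFORMATION timestamps, names, and data runs.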

The problem of “Going Dark”

Law enforcement, particularly at the federal level, has a problem. Forensics and investigations are getting harder, despite the increased digitization of evidence. Imaging and storing drives and data is more difficult as:

  • we all have more of it (how much storage do you have with you today?)
  • full-disk encryption (FDE) is getting easier and more automatic
  • filesystems and file formats continue to diverge and converge (HFS+ → APFS, NTFS extensions, a gazillion Unix filesystems, filesystems on phones)
  • same with physical interfaces (phones are getting better, but feature phones are still a huge headache)
  • data might not even be local: it’s in the cloud, whatever that means.
  • RAM and hardware forensics are really hard

Lawful (that is, court-authorized) domestic wiretaps of both voice and data are getting harder to execute:

  • encryption
  • peer-to-peer protocols
  • providers outside the US

The de facto loss of wiretap and investigative powers is sometimes referred to as “going dark.”

Privacy law

Wiretaps are a particular power authorized and used by our government. Normally our privacy is protected from government snooping (though not from private parties) by a set of protections: the 4A, its surrounding case law, and various statutes that constrain government action.

Originally the 4A was a reaction against general warrants, where officers of the law (the British crown!) could search, essentially, whatever wherever whoever they deemed necessary.

The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.

The 4A requires that law enforcement (executive branch) go to a court (judicial branch, so powers are separate and presumably checked), swear out a warrant application (perjury for lying), show probable cause (not proof, but “where the facts and circumstances within the officers’ knowledge, and of which they have reasonably trustworthy information, are sufficient in themselves to warrant a belief by a man of reasonable caution that a crime is being committed” (Brinegar v US)), and particularly name the person/place to be searched and the thing to be seized.

Wiretaps (real-time interception of the contents of telephone and computer communication) require a “super warrant,” aka a “Title III order.” Investigators must show probable cause that the wiretap will reveal evidence of one of a particular set of crimes, and must further show (1) that normal procedures have been tried and failed (or would be too dangerous to try), (2) probable cause that the communication facility is involved in the crime, and (3) that surveillance methods will minimize interception of communications that do not provide evidence of a crime.

If we grant that law enforcement has the right, post-warrant, to wiretap in order to investigate crimes, then going dark should concern us. What to do?

Mandated access

CALEA (the Communications Assistance for Law Enforcement Act) requires telecom providers to be able to assist law enforcement. In other words, it is a law that mandates certain intercept capabilities:

The Act obliges telecommunications companies to make it possible for law enforcement agencies to tap any phone conversations carried out over its networks, as well as making call detail records available. The act stipulates that it must not be possible for a person to detect that his or her conversation is being monitored by the respective government agency.

For communications that pass through centralized telecom companies, CALEA more or less does its job, at least at the network level. Switches that handle voice communications have intercept capability built-in. (And this shouldn’t be surprising. Think of the history of telecoms: https://en.wikipedia.org/wiki/Switchboard_operator)

Routers carrying IP traffic must also comply (by ruling of the FCC), as must centralized VOIP providers. IP routers “delegate the CALEA function to elements dedicated to inspecting and intercepting traffic. In such cases, hardware taps or switch/router mirror-ports are employed to deliver copies of all of a network’s data to dedicated IP probes.”
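To make the probe idea concrete, here is a toy sketch (my illustration, not any vendor’s implementation) of the per-frame inspection such a probe performs on traffic copied from a tap or mirror port, assuming plain Ethernet II / IPv4 frames:

```python
# Toy sketch of the inspection step an IP probe performs on each frame
# delivered by a tap or mirror port: decode the Ethernet II and IPv4
# headers to decide whether the packet matches an intercept target.
# (Real probes also handle VLAN tags, IPv6, fragmentation, reassembly.)
import socket
import struct

def summarize_frame(frame: bytes):
    """Return src/dst/protocol for an IPv4 frame, or None for other traffic."""
    ethertype = struct.unpack_from("!H", frame, 12)[0]
    if ethertype != 0x0800:            # not IPv4; ignore
        return None
    ihl = (frame[14] & 0x0F) * 4       # IPv4 header length in bytes
    proto = frame[14 + 9]              # e.g., 6 = TCP, 17 = UDP
    src = socket.inet_ntoa(frame[14 + 12:14 + 16])
    dst = socket.inet_ntoa(frame[14 + 16:14 + 20])
    return {"src": src, "dst": dst, "proto": proto, "payload_off": 14 + ihl}
```

A probe would apply a filter like this to every copied frame and forward matching traffic to law enforcement, which is precisely why the interface must itself be secured.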

The problem

Once the capability for monitoring exists, it can be used, and not always lawfully. For example, the Greek Watergate affair: https://en.wikipedia.org/wiki/Greek_wiretapping_case_2004%E2%80%9305

Bellovin et al note that CALEA-like interfaces are problematic because they are designed specifically for surreptitious eavesdropping, unlike more typical network monitoring that logs and alerts.

They argue that broadly requiring CALEA-like interfaces on not just network-level protocols but all application-layer protocols is a recipe for disaster. Let’s go over their argument.

Because the Internet is (or can be) decentralized (any computer can act as a server), they argue that wiretapping capability would need to be widely distributed. In some sense it already is, at the IP layer, but their argument appears to be that any provider of a service, not just providers of Internet connectivity, would need to comply with CALEA. They assert this is “untenable.” Specifically, they point out that Internet startups are diverse and dynamic, and that forcing them to integrate a complex wiretap protocol over “quickly deployed and poorly debugged services” would be an expensive burden on small companies.

P2P, they similarly argue, doesn’t accommodate the CALEA model, as there is no centralized entity to regulate (though one wonders about how true this is, given how effectively some p2p systems have been shut down in the past when the company that developed the main version of software was shut down: see https://en.wikipedia.org/wiki/LimeWire for example.)

Their best argument, IMHO, is that expanding the number of CALEA-like interfaces in the network would create great insecurity. The vulnerabilities in every CALEA-compliant switch tested by the NSA show how hard it is to get the interception technology correct.

The proposed solution

Bellovin et al. argue that the FBI should instead leverage the “essentially unlimited number of security vulnerabilities” in modern computing and communication devices.

General criminal compromise of computers is wide-ranging and non-specific. Unpatched computers (more typically: services) might be remotely exploitable or not, and criminals do not typically target their attempts at compromise (depends on the criminal and goal, of course).

LE tools must be targeted. They must be likely to succeed. They must not disrupt services (of target or others). And they must be manageable: it should be easy to check if the tool worked, be able to control it during monitoring, and be able to clean up when done.

Four primary components:

  • selection or discovery of vulnerability
  • installation
  • obtain access after installation
  • obtain communications

And, how reliable is the data that’s gathered? Judges must believe that the tool gathers only and exactly what is cited in the warrant. Tools that undercollect might miss exculpatory evidence, and overcollection will violate the warrant’s limits. (This differs from minimization, where LE must take reasonable steps to ensure collecting only communications of the subject, and only when they are committing a crime.)

We’ve seen this already with NITs (network investigative techniques), where the FBI discovered and used an exploit that it served to the Tor browser to recover information from users. More generally, any computer connected to the Internet that might be remotely exploitable could be monitored in this way, though discovering 0days might be hard.

Finding vulnerabilities

One could imagine a LE lab that was federally funded to find LE-grade exploits in apps and platforms (e.g., Windows or iPhone). Or LE could purchase exploits on the vulnerability market (though clearly there are ethical issues here – are they creating or encouraging an illegal market?).

Vulnerabilities remain useful until they are disclosed, patched, and the patch is deployed widely. (How long is that time period? Who knows?) There is also the issue, as we’ve seen with NITs, that the defense might want the exploit for whatever reason, and by giving it to them, LE will lose access to a tool.

Relatedly, should the government disclose vulnerabilities after a set period of time (maybe even immediately)? The ethical calculus is not clear here: retaining a tool for lawful wiretapping, versus pre-empting crimes that third parties might commit using the same vulnerability.

Finally, this lab would be a nightmare to secure: what a rich target! Not unlike NSA/CIA/etc., it would almost certainly need to be air-gapped and otherwise highly secured – which would only serve to slow its progress and make it more expensive to operate.