This rough transcript has not been edited and may have errors.
Deirdre: Hello, welcome to Security Cryptography Whatever! I’m Deirdre.
David: I’m David.
Thomas: Not it.
Deirdre: Who are you? Cool. And our guest today is Eric Mill, who is currently at OMB, the Office of Management and Budget, right in the United States executive branch. Hi Eric, how are you? Yay.
Eric: Hello. Thanks for having me.
Deirdre: Who are you?
Eric: Who is this guy? So, yeah, Eric Mill. I work for the Office of Management and Budget. That’s what OMB stands for. I’m constantly having to describe that to friends and family. So it’s a part of the Executive Office of The President (EOP), which is what people typically refer to as The White House.
And we’re the part of EOP that focuses on agencies, on how agencies work. Obviously, the budget is a big part of that. There’s also the management side of it, which is essentially where a lot of policies and oversight and other interesting interventional stuff happens over agency operations. My part of OMB is the Office of the Federal Chief Information Officer.
That’s who I work for: the Federal Chief Information Officer, Clare Martorana. Our office does technology and cybersecurity policies for how agencies work. There’s an obvious intersection with how their public-facing stuff happens as well, but we really focus on policy for agencies.
Folks in this audience might be aware that about seven or eight years ago, there was an HTTPS policy that came out of the Obama White House, mandating HTTPS for federal web services. The signature on that policy was the Federal Chief Information Officer’s. That’s the sort of thing that this office does.
Thomas: You didn’t spend your whole career in the government, right? Like you’ve been elsewhere.
Eric: Yeah. I have been elsewhere. The ratio is starting to add up to a lot of government time, but I’ve been a software engineer and web developer for a long time. I was somebody who really got into making websites in 1997 when terms like DHTML were still tossed around.
I could entertain people in my high school library by showing them how I could make a page that made the screen flash colors and stuff. I went to college for computer science, to get a bachelor’s, at Worcester Polytechnic Institute.

I couldn’t really imagine doing anything else eight hours a day. The concept of having a full-time job was very dreary, and solving puzzles for fun seemed like a pretty good way to spend it. So I signed up for a CS degree. This was back in 2001, my freshman year, which was when the dot-com bubble burst.

So a lot of people were fleeing that major, and I hung tight to it, because I couldn’t imagine doing anything else. I graduated in ’05 and worked out in the private sector for a little while. I worked for a small consultancy called thoughtbot, which made what at the time, in 2006, was a really big bet on Ruby on Rails, whose future was not assured back then.

And I took a big pay cut to do it, because Ruby just seemed really cool to me. You can make fun of me for that later, as you wish. I spent a few years there becoming a functional software engineer, and then ended up moving to a nonprofit called the Sunlight Foundation in Washington, DC.
There’s a brief period in there where I worked for a politically focused, Democratic-leaning digital consultancy called Blue State Digital on their Obama campaign work. After that I moved to DC and worked for the Sunlight Foundation, which does not exist anymore, but at the time was a 50-person-strong nonprofit with a 15-person engineering team and a few designers doing all sorts of different things.

I was there for about five years before I jumped into the government around 2014, when the various digital service teams that now exist were starting up. I did that for another four or five years; that’s where I started getting more formally into policy work. Then I ended up working in the Senate for a little while after that.
Eric: And then I spent a year out in the private sector again before this, where I was the lead PM for security for the Chrome browser. And then I jumped back into government when this came around. So that’s the background.
Deirdre: We’re talking to you today because there was a memo put out by your office in January with the title "Moving the US Government Toward Zero Trust Cybersecurity Principles." And for anyone who’s been vaguely paying attention to what the government recommends about how to secure systems, federal or otherwise, in the United States, this was one that actually made you go: oh, these are all good recommendations. And not just scraping the floor with bare minimums. And I think we all started to read it and we...
Thomas: Beyond what Deirdre says: anyone who’s selling a security product definitely noticed this. They noticed the hell out of this.
Deirdre: Yeah. So Thomas read it in depth. The rest of us glanced at it.
David: We’ve all read it, but I think before we get started, it would be good to hear a two-paragraph summary of the old state of the world, and of the state of the world that this memo is recommending, before we dive into details.
Eric: I guess that kind of starts with the term zero trust, but let me just give a little bit of the scenario that was leading up to this, right? So this was done under the aegis of the cybersecurity executive order that President Biden signed within a few months of taking office.
Eric: That was clearly partly in response to some of the compromises of federal agencies that are often referred to in shorthand as SolarWinds, which involved the compromise of that vendor. Obviously there’s some other stuff wrapped up in that too. And shortly before issuance of the order, there was also the Colonial Pipeline ransomware attack, which a lot of people felt and which real people around the country noticed, some in their day-to-day lives. That executive order asked for a whole bunch of things, and my office, the Office of Management and Budget, has been responsible for a significant number of memoranda and policies to implement pieces of it.
So we’ve also done other memos around increasing logging of data within these different systems, and around endpoint detection and response; there’s some stuff going on now around secure software development. But one of the big things the order called for was moving agencies toward zero-trust principles. And zero trust is one of those words that can mean a lot of things to a lot of people, as I know Thomas was getting at, and that is certainly true.
And I think, if you read what we were trying to do with this, and if you dig into how we frame it, the goal is to really take least privilege and the concept of not having implicit trust seriously, and to take that to its logical conclusion. So there’s a big emphasis on moving away from, for example, a network-perimeter-based approach.
Eric: It’s not the first time this concept has come up, and it’s not the first time it’s been involved in government documents. It was certainly talked about in some of these terms after the OPM breach six or seven years ago. But I think it is fair to say that it has not been taken to this degree, where some significant things are going to change over time.
It’s very common today in federal enterprises to start your day by putting your PIV card into your computer and logging into your intranet. And once you’re in there, you can pretty easily start logging into other things and doing work. One of the things that we try to hammer home a lot is that in mature environments, you are logging into the application layer.

There’s still single sign-on, but you’re doing it at layer seven, not layer three or four. What that constitutes is moving the perimeter closer to the information that’s being protected, and it also gives you a much more semantically rich environment to work in. Ultimately, zero trust is about least privilege: combining information about the user, the time, the device, the nature of the application, and which of these are anomalous, if anything. It means having more information to synthesize at your disposal than many enterprises, especially those that aren’t gigantic companies with infinity engineers, have available to them.
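The kind of application-layer decision Eric is describing, combining user and device signals with deny-by-default, can be modeled in a few lines of Python. This is a toy sketch; the field names and the sensitivity labels are invented for illustration, not anything the memo prescribes:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_authenticated: bool   # e.g. SSO with phishing-resistant MFA passed
    device_compliant: bool     # at least one device-posture signal is good
    resource_sensitivity: str  # hypothetical labels: "low" or "high"

def authorize(req: AccessRequest) -> bool:
    """Deny by default: network location grants no implicit trust.

    Every request needs an authenticated user; high-sensitivity
    resources additionally require a compliant device signal.
    """
    if not req.user_authenticated:
        return False
    if req.resource_sensitivity == "high" and not req.device_compliant:
        return False
    return True
```

The point of the sketch is the shape of the policy, not its specifics: the decision is made per request at layer seven, with several signals available, rather than once at the network edge.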
Eric: So those are big changes. And really, take the concept of no implicit trust seriously: if you’re just sending traffic unencrypted around your internal network, you are putting tons of implicit trust in that internal network. And that’s not what this is about. So even though some people might say HTTPS is getting a little passé at this point, we know that it’s very important to do it everywhere.
Thomas: Not just out in public, but inside your internal organization. I guess a first question I have, just to lay the groundwork: this is an OMB memo, and the OMB governs the way that all the agencies under the executive branch actually operate. So what force does an OMB memo have? Is it possible for an agency to read the OMB memo and say, this is all interesting stuff, but here at the Department of Fisheries and Wildlife or whatever, we’re just gonna keep doing things our way?
Eric: So at the Office of Management and Budget, we publish a lot of memos, right? And these memos are binding; there’s statute to back that up. I have had the experience of working in the government before, particularly when I was at GSA. I mentioned the HTTPS policy earlier; that is a policy effort that I think was successful for what it did in the federal government.

And my experience was that it was successful not because it was perfectly crafted, but because years of patient, persistent, multi-stakeholder follow-through ensued to make it actually happen. There were people who cared about it, who answered as many emails as were necessary to resolve questions.
There was public material to help folks do that. There was ample public engagement and private engagement to see this stuff through. And one of the things we’re really trying to do here, and it’s a tricky thing, because this is a big enterprise security overhaul. This isn’t just deploying a protocol; this is a major change to how an organization functions. That is a multi-year project. So how do you maintain urgency and structure? How do you not let the energy drop out of the room? We have taken a few routes to try and do that. Part of it is how the memo itself is structured.
So there is a mix of places where there is flexibility, and that multi-year timeline is explicitly contemplated: we’re asking agencies to come back with a plan for how they’re going to do it. Some of it is actually more short-term. What it means to be on a path to do this in three years is that you’ve done certain basic things in the first year.

For example, there are a couple of MVP-style things in there where we’re saying: zero trust is about combining a lot of signals, but we really wanna see you get at least one signal relating to the devices that your employees have. And we wanna see that connected to your authentication and authorization decisions in some way in the first year.
Eric: And we think that, by forcing yourself to do it, will get you over some barriers in a way that will be relevant for the rest of the effort. Similarly, we have a thing in there about a significant internal system being prepared for direct internet accessibility, to match the threat model that we are talking about here, where things should effectively be treated as internet-connected.

That doesn’t mean an agency is necessarily gonna do that to their entire organization. But being able to do it, and figuring out a security strategy and a set of tooling and processes that can get everybody comfortable with doing it, is gonna be important. The other way we’re trying to sustain this over a few years is that we’re taking the plans that we have asked for very seriously.
So agencies have submitted those plans to OMB and to CISA, the Cybersecurity and Infrastructure Security Agency, the DHS cybersecurity unit that we work very closely with. We’re reviewing those plans, we’re gonna be talking with everybody about them, and they’re gonna form the foundation of a lot of our follow-through and oversight work in the years to come.

And that is separate from a bunch of other stuff that we’re also doing, where we routinely have 700- or 800-person-strong gatherings of the federal IT community to talk with us about some of the issues they face, raise questions, and ultimately provide input into how we go about providing further guidance to them.
So there’s a whole host of things that sometimes don’t necessarily come across when you just read a fancy PDF with a signature at the end of it. But one of the things about doing policy of any kind is that issuing the policy is the beginning of the work, not the end, even though a ton of work goes into making that policy.

And we’re really trying to embody that with this one.
Thomas: And there are a lot of agencies, right? You think of the top-level cabinet things, but there are hundreds of them, right? The Susquehanna River Basin agency or whatever. And for the most part, they’re all separate IT organizations.
Eric: Yeah. So there are many agencies. There is the cabinet, and then there are all these small agencies, which would not meet some people’s definitions of small but in the federal government are considered small. There are micro-agencies, and they really run the gamut.

There are agencies with single or double digits of people working for them, right? There are agencies with specialized functions that may not even have a full-time IT person, ranging up to cabinet-size agencies that contain what you and I might consider many, almost independently functioning, organizations inside them.
The Department of Commerce contains the entire Census Bureau, and NIST, and NTIA, all these things that a lot of people engage with as if they were their own agency, but they are in fact part of the Department of Commerce and report up. And they’re a real functioning unit too. The federal government is a sprawling enterprise. That’s exactly right.
Thomas: Okay, the memo itself. It’s a big memo, right? There’s a lot going on. And I don’t know if this is a critique of it or just an observation, but it reads as if somebody took a look at the whole sprawling gamut of different agencies and different IT organizations, and wish-cast the ideal kind of enterprise security organization, in some detail, for all of them. It’s a whole architecture, right?
I think the top-line thing that people have taken from it, and the thing that probably made the most news, is that there’s a sort of implicit repudiation of VPNs in it. It’s an old problem in enterprise security: people run these network architectures where the way you get access to things is to get on the VPN.

And once you’re on the VPN, you’re on an internal network. And in our industry, in our field, if you’re on somebody’s internal network, that’s almost always game over, right? Once you’re there, there’s nothing that’s gonna prevent you from escalating privileges all the way to whatever the maximal privilege is. And it seems like one of the primary goals of this is to make these organizations more survivable, so that they’re not dependent on simply: you put your PIV card in, you get access to the network, and now you have access to literally everything in there.

Everything now is authenticated through an IdP, through a single sign-on system, and there’s device attestation and all that. But it’s pretty ambitious, right? Most enterprises are not operating at this level of... sophistication isn’t even the word I would use. I would use coherence. It’s a very coherent view of how enterprise security should work, and I would say it sets the bar past where industry sets the bar right now.
Eric: I think there’s a lot of truth to what you’re saying. There is absolutely a deliberate strategy here. I described what preceded this cybersecurity executive order, and it’s not the first or second time that the US government has been punched in the face by a significant adversary.

And there are only so many times you can have that happen to you before you really have to not just focus on triaging incrementally to something that’s a little bit ahead of where you are. There is a time when you have to lay it out: we have a lot of work ahead of us, and this is something like what it looks like.
There’s definitely lots of room for flexibility in there, in how agencies choose to architect their work. And if you dig into the NIST zero trust architecture special publication, it goes into a lot of detail about that. It’s a great publication, but you’re right.
Eric: There are some things here that are very different from how federal agencies operate, and in fact from how a lot of industry operates. This is something that I, and other folks in our organization and in the administration, have been talking about a lot: this isn’t a case of the government running 10 or 15 years behind industry.

On this particular thing, to a large degree, we are all in it together. There are tools out in the space that are still maturing. There’s a lot of friction still to just doing all of this stuff in a big enterprise, and that’s understood, but I think it really can’t stop us from action, and from some understanding of what we need to do.
David: It’s definitely a very ambitious plan. I think there are people at large, nominally tech-first companies, and early-stage startups, who if they got some of these requirements and the three-year timeline would be like, holy crap, I don’t know that I can pull that off.
Thomas: And personally, I think that’s great. So we can kind of summarize what the architecture is, right? There’s a core idea. And Eric, I’m saying this so you can correct me, so jump in when I’ve got something wrong. Any system that’s running inside of an agency presumably exists to support some application that they’re actually using to serve their mission.

So for all of those core applications, the idea is they’re no longer gonna be parked on internal networks, where you get a key to the backstage network and that gives you access to the application. Instead, all these applications are gonna be, not on the internet because they need to be, but their security posture is that they’re internet-accessible.
And the way you control access to all these applications now is through single sign-on through an IdP, which is a good thing; everyone should be doing that, and it’s probably where the industry is heading. With that architecture, I think you’re probably working to correct a lot of bad practices that came about from people hiding things behind VPNs. There’s an idea that we’re not doing any kind of protection or enforcement based on VPNs anymore: if you’re running a VPN, the VPN can’t be what’s protecting a system; something else has to be protecting it. In addition to that, there are pretty strong recommendations that on top of single sign-on through an IdP with phishing-proof multi-factor authentication, which is all motherhood and apple pie, you’re gonna get device signals. One of the things you’re trying to do with this architecture is not just prove that, in theory, it’s the person themselves requesting access to the application, but that they’re doing it on a machine that you can somehow trust is up to spec or whatever.
It goes further than that, though. There are prescriptions in the memo about security testing; this is a big thing where I’m gonna have follow-up questions. There’s language in there about moving all of these applications, in all the federal agencies, to a point where they welcome external security testing, where they have a published VDP. Effectively it’s like saying we’re gonna have a bug bounty for everything inside of the federal government, without the bounty part, but with an overt authorization for people to do security testing. And there’s stuff in there about, if we’re keeping secrets, OMB is gonna come up with a document classification process somehow.
You guys are gonna figure out what sensitive data is. And then for access to that data, there need to be KMS- or Vault-style systems, where the specific guidance is that even if the application itself is compromised, you’ll at least have logging that these particular documents were accessed. You can see why you would want that, right? Because after SolarWinds, you’re left wondering what the hell our adversaries actually had access to in the first place. But a knock-on effect is that the way you get an audit log of who accessed something, even if the application is compromised, is that it’s protected by something like a KMS or a Vault or something like that.
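A minimal sketch of that KMS/Vault-style pattern, in Python. The class and names here are hypothetical, standing in for an external secrets manager: the key property is that the access record is appended before the value is released, outside the calling application's control, so a compromised caller still leaves a trace:

```python
import time

class AuditedSecretStore:
    """Toy stand-in for an external secrets manager (KMS/Vault style).

    Every read of a protected item is recorded in an append-only log
    that lives with the store, not with the application reading it.
    """

    def __init__(self, secrets):
        self._secrets = dict(secrets)
        self.audit_log = []  # append-only: (timestamp, principal, secret name)

    def get(self, principal, name):
        # Record the access *before* releasing the value, so even an
        # abusive or failing read leaves a trace outside the caller.
        self.audit_log.append((time.time(), principal, name))
        return self._secrets[name]
```

In a real deployment the log would be shipped somewhere the application can't rewrite; the sketch only illustrates why putting the audit point behind the application, rather than inside it, answers the "what did the adversary actually touch?" question.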
It goes on with logs and things like that. So it’s pretty comprehensive. There’s mandatory encrypted DNS; I was gratified to see that it covers DNS over HTTPS and DNS over TLS, which is awesome.
We see a video feed of you when you come on the podcast, right? In the days leading up to this, I was making wisecracks, not about you, but about the US government in general, with Alex Gaynor, a friend of the podcast who also works in the government. It’s, have you ever considered working for the federal government? And there are reasons I wouldn’t, like the fact that I don’t wanna move to DC. But then I saw you pop up on the podcast in a jacket and tie, and I’m like, I’m never working in this environment.
Deirdre: And the American flag behind you.
Eric: Thomas, you absolutely should go work for the US government. I’m happy to say that to everybody on this podcast. And by the way, yes, you’re right, you have a video feed of me and you’re seeing me in a suit and tie, which is not representative of most of the time that I’ve spent in the US government.
And I, yeah. Anyway, we don’t need to go into that, but,
Thomas: Let’s start here: is encrypted email gonna happen? This says encrypted email traffic. This is not the same as GPG-encrypting your emails, right?

Eric: That’s right, it’s encrypting email traffic in transit. That’s the subject of that whole section. So again, if you take a look at the cybersecurity executive order, which for the careful reader is Executive Order 14028, it talks about encryption in transit in general.
Eric: And there is a big push right now in the federal government to button up on encryption in transit. What we are trying to focus on in that part of the memo are some places to really make sure we prioritize, and where there are some specific things that we’re probably gonna have to resolve in order to do it.

So with HTTPS, there’s a focus there on browser preloading, because that is a pretty incredible forcing function for making sure that you’re tackling, and maybe I’ll misuse this term, but essentially split-level DNS: you have a ton of agencies who hang internal hostnames on their public domains.
And there is actually a way to externally validate some enforcement of that for browser-based HTTPS, at least with preloading. So that’s a particular angle that we’re dealing with there. The US government has been preloading .gov domains as they’ve been registered for some time, and this is a bigger retroactive push.
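The preload requirements can be checked mechanically. Here's a small Python sketch that validates a site's Strict-Transport-Security header value against what are, to my understanding, the hstspreload.org submission criteria: a max-age of at least one year, includeSubDomains, and the preload directive:

```python
ONE_YEAR = 31536000  # seconds; the minimum max-age for preload submission

def hsts_preload_eligible(header: str) -> bool:
    """Parse a Strict-Transport-Security header value and check it
    against the preload-list submission requirements (a sketch)."""
    directives = {}
    for part in header.split(";"):
        part = part.strip().lower()
        if not part:
            continue
        name, _, value = part.partition("=")
        directives[name] = value
    try:
        max_age = int(directives.get("max-age", "0"))
    except ValueError:
        return False
    return (max_age >= ONE_YEAR
            and "includesubdomains" in directives
            and "preload" in directives)
```

Note that includeSubDomains is what makes preloading a forcing function for the internal-hostname problem Eric mentions: once the registrable domain is preloaded, internal hosts hanging off it must speak HTTPS too.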
And with DNS, I think it’s pretty well understood that there’s still work being done to do this. DNS over HTTPS was just built natively into Windows with Windows 11, right? So this is something where we know we need to do it. And in fact, CISA has been standing up Protective DNS, which supports this, right?
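For a sense of what DNS over HTTPS actually is: it's ordinary DNS wire format carried over HTTPS. A Python sketch of the encoding step (the RFC 1035 question format; a real client would POST these bytes to a resolver endpoint with Content-Type application/dns-message, per RFC 8484):

```python
import struct

def build_dns_query(hostname: str, qtype: int = 1) -> bytes:
    """Encode a single DNS question in wire format (qtype 1 = A record)."""
    header = struct.pack(
        "!HHHHHH",
        0,            # ID; RFC 8484 recommends 0 for HTTP cache friendliness
        0x0100,       # flags: standard query, recursion desired
        1, 0, 0, 0,   # one question, no answer/authority/additional records
    )
    # QNAME: each label is length-prefixed, terminated by a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.rstrip(".").split(".")
    ) + b"\x00"
    question = qname + struct.pack("!HH", qtype, 1)  # QTYPE, QCLASS=IN
    return header + question
```

Because the query then travels inside an ordinary TLS connection, an on-path observer sees only HTTPS traffic to the resolver, which is the whole point of the encrypted-DNS requirement.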
There is a big push to get this stuff in place and to start using it, and we need to drive to the point where it is the norm, and where it is encrypted for as many legs as the US government is in control of. And then with email: I think there’s probably a wide understanding among the folks on this podcast and among listeners that email encryption is pretty fraught, and that it’s fundamentally still opportunistic in the way that it is deployed over the internet.

What you see there is a request of CISA and FedRAMP and others to work on this. It’s not spelling out exactly what the solution is, but it’s saying: these are the things that we’re gonna have to tackle if we actually want to encrypt this stuff in transit.
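The opportunistic behavior Eric is alluding to can be modeled very simply: a sending mail server that falls back to plaintext when STARTTLS isn't advertised can be downgraded by any active attacker who strips the capability from the server's response. A toy Python sketch (function and parameter names invented here; a strict mode corresponds roughly to what policies like MTA-STS, RFC 8461, provide):

```python
def choose_transport(server_extensions, require_tls=False):
    """Model opportunistic SMTP STARTTLS (RFC 3207).

    server_extensions: capabilities the receiving server advertised
    in its EHLO response. If STARTTLS is absent, the historical
    behavior is to send in plaintext anyway, which is exactly the
    downgrade an on-path attacker relies on.
    """
    if "STARTTLS" in server_extensions:
        return "tls"
    if require_tls:
        # Strict policy: refuse delivery rather than downgrade.
        raise ConnectionError("no STARTTLS offered; refusing plaintext")
    return "plaintext"  # opportunistic fallback, silently downgradable
```

The asymmetry in the two modes is why "encrypt email in transit" is a policy problem and not just a deployment checkbox: strict enforcement changes delivery semantics, which is what CISA and FedRAMP are being asked to work through.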
David: This is where I feel compelled to plug the paper I wrote back in 2015 called "Neither Snow Nor Rain Nor MITM: An Empirical Analysis of Email Delivery Security", about STARTTLS.
Thomas: The external security testing thing, the VDP stuff: is that a marked shift from the way the agencies operated before? Is this a big thing, a thing I should be thinking a lot about? Or is it just a formalization and a continuation of a policy that already existed? How big a deal is the VDP stuff?
Eric: I guess it depends on how you look at it, because there has been some policy on the books around VDP, from OMB and from CISA as well. CISA publishes binding operational directives to agencies, and one of the ones they published, I guess a couple of years ago now, required the instantiation of vulnerability disclosure policies.

So there is something there, but it is very new to most of the federal government. Many folks may be familiar with what the Department of Defense did with Hack the Pentagon some years ago, and some other bounties since. I’m proud to say that at GSA we also had the first civilian bug bounty and vuln disclosure policy some years ago. But it is still quite new.
So one of the things that we’re trying to dig at with that application security section, with vuln disclosure, is really presenting this concept that there are multiple rings of security that need to be brought to bear. All the way from bringing in some seriously expert people, which probably has a heavy overlap with the listeners of this podcast, and having them, on a sort of application-by-application basis, make their best effort to subvert its expectations, the way that our adversaries go about doing this. All the way to the pretty huge existing body of security analysis that is part of the federal government’s standard operating procedure.
And then including even external input: making sure that we’re able to hear what folks on the outside can see and want to tell us. I’m certainly comfortable talking a little bit about this: before this was a thing across the federal government, it was not unheard of for people to report serious vulnerabilities to the federal government by having somebody they trusted in a random, unrelated agency essentially proxy their report, using contacts that person had inside the government. That came out of not knowing how to report it, a fear of what it means to report that stuff to the federal government, and a worry that the conversation would focus more on how you found it than on what the severity is.

So there has certainly been a multi-year effort to change the norms around that and create these open doors. And we really do view it as a critical aspect of what it means to take seriously the concepts and principles that are part of the zero trust strategy, and just part of modern enterprise security.
Thomas: So, brass tacks on this. I’m a vulnerability researcher; that’s my background, right? And I interact with government systems every once in a while, and most of the government systems I interact with are at the state and county level. You can imagine: they’re state and county IT systems. Google didn’t build them. They’re janky in lots of different ways. As a vulnerability researcher, I would be terrified to poke around any of those systems looking to see just how janky they were, because I can imagine the response I might get from an admin there, or how that might escalate.

So is it safe now? And if it’s not, will it be safe within the next, I don’t know, year or two for a vulnerability researcher like me, if they’re just randomly poking around .gov systems looking for pre-auth SQL injection? Is that a safe thing to do now, and will it be safe in the near future, based on this?
Eric: So, you know, Thomas, things have changed a lot, and not just in the government, and certainly not just in the federal government. One of the things that I worked on when I was in the Senate in 2019 was vulnerability disclosure for election systems, which involved mostly state, local, and county systems, the kind you’re terrified of poking into. I spent a lot of time dealing with researchers who were in fact doing that, and trying to get things reported and fixed. And that did lead some states, some secretaries of state, to publish vulnerability disclosure policies for the first time in that space.
There are other areas that were historically litigious that have calmed down substantially over the years; the medical device industry comes to mind. It’s gotten to a place where, if somebody came to me and was like, I found something and I wanna report it, I’d recommend they do it. And at this point in the federal government, if somebody found a sensitive thing, I would have no trouble pointing them toward CISA’s vulnerability disclosure platform, which a lot of agencies use; or if an agency is running its own thing, I’d recommend they go in that way.

It is a time where I can recommend people do that.
Thomas: Do you see the memo as encouraging vulnerability researchers who sometimes do this stuff for sport or for fun? Do you see it as encouraging those people to start aiming their scanners at .gov and looking to see what they can find? I get that there’s an element of tolerance, and of streamlining reports so they go to the right people and don’t blow up. But is this something that you want to have happen? Are you actively encouraging it with the memo?
Eric: It's not just us; obviously I just mentioned CISA's vulnerability disclosure platform. There is an element of encouragement by the government: we want people to go and find these things, because it's better to know than to not know. So, absolutely. And though this is one of the smaller things in the memo, there is also an element in there we really wanted to address, especially since there's so much emphasis on relying on the cloud, which agencies are doing more of, taking advantage of security features you can sometimes only get in the commercial cloud.
What we wanted to also clear up is that vulnerability disclosure authorizations should be applicable even if there is a cloud provider under the hood. We want to make it extra clear that just because an agency is moving to host their thing on a cloud provider, that doesn't get in the way of them feeling like they need to have ears to hear what people find on their systems.
Deirdre: So does that mean, like, you host your app on AWS, and there might be a vulnerability disclosure program for AWS subsystems that you may use, but just because that exists, that does not preclude your app, the software that is running on AWS, from also having its own vulnerability disclosure process, right?
Eric: Yeah, that'd be an example. And there are a lot of cloud providers out there, not just the big ones that come to mind offhand; there's a long tail of them, and they are all in very different places with how they do vulnerability disclosure.
David: Anecdotally, in my experience interacting with the federal government: we ran a scanning program out of the University of Michigan for many years. That was what most of my PhD was about. And in the very early days, actually prior to when I was in the research group, around 2012, when we were doing the internet-wide scanning, DoD bulk opted out all of their IP ranges.
And they remained on our scanning deny list for years. Then, when we launched Censys as a public research project some amount of time later, someone from DoD reached out and was like, why can't I find any dot-mil domains in Censys? And we said, here's this email from 2012 where you told us to cut this crap out. They were like, "that's interesting". And then some amount of time later, someone with a much fancier title from DoD contacted us and said, please put us back on your list; we would like to be able to take advantage of all the things that you're doing and get this information. So anecdotally, there definitely has been an attitude change, even just within my limited interaction with the federal government.
Eric: Just to talk about the civilian side, from my own experience at GSA: I absolutely had the experience of making use of scan data, perhaps from some of the stuff you worked on and also from some other sources, putting it to defensive use and sharing that information with civilian agencies so that they could do something about it. It was very clearly helpful to be able to take that outside perspective and move it inside, because it just doesn't always happen organically.
Thomas: I have one more question on this vulnerability stuff, and then I'll let up. There's a big section in the middle of the memo about changing the way you deliver the actual applications themselves, and you get into details about how you're going to do security assessment. You point out explicitly that you can't get out of these requirements just by running a scanner once.
You're actively interested in human expertise, and you talk a lot about making people that can do software security assessments, for instance, much more...

Eric: I don't think we talk about "making people". But okay.

Thomas: ...much more accessible as a resource for getting things tested. There's a point at which you talk about bringing down the amount of time it takes to get a test scheduled from weeks to days, for instance, and about finding, I assume, vendors to do that. A big question I have is that there isn't a huge pool of talent to do that kind of work in general.
It's difficult for people in industry to get that work scheduled; people are booked way out into the future right now. How do you think about who the effective vendors, the effective suppliers of that expertise, are going to be? How are you going to evaluate that? Is there going to be a list of trusted vendors? Is that going to be a program that you run?
Eric: You're asking a whole ton of questions about procurement stuff before it happens, so I probably won't be giving you as much detail as you might want here. But one thing we've done is ask GSA and DHS to work together to figure out some of those things and to try to advance this. You're absolutely describing why this stuff doesn't happen organically, and that's a problem we want to try and walk towards, not away from, over the next couple of years; that's probably about the best way I could put it. But you're also absolutely correctly describing the kind of skills that we are looking for, skills that just don't always get applied consistently to some of the sensitive systems that we have, but that are applied relatively consistently by our adversaries.
David: So let's pivot to identity and devices, selfishly, because that's the part of zero trust that I personally find the most interesting. I think for the most part people have agreed on the value of single sign-on at this point. And when people think zero trust, or when companies market zero trust products, a lot of the time it centers around single sign-on and the reverse proxy you put in front of your app to do the single sign-on.
That's a fairly well-trodden path at this point, although individual instantiations of it can get arbitrarily complicated depending on your setup. The part that I see less consensus on is devices. In general, even if you have really good asset management and you know what all the computers in your employee fleet are, it's not as clear how to integrate that into your authorization and authentication process. It might be easy to say, all of my employees have a PIV card because they're in the government, or some other form of security key, and they all have an account in some system, so I have phishing-resistant MFA. But actually being able to say they're on a government device, versus some other device, I think is a lot harder. And you could debate whether they actually need to be on that device or not, but I'm curious: if I were to pick the spot that's the least trodden path, and is probably going to require the most, let's call it engineering work, in here,
it would be the goals under devices: integrating devices into your authorization signal. So I'm curious what you think the first steps are there, or maybe concrete examples of what might satisfy the memo. If a government agency came to you and said, we understand identity, but we don't know what to do for devices, what would you tell them?
Eric: You're absolutely right. This is one of those things where, I would say, we deliberately chose this, because some of the things that are required to connect these device signals into your SSO or your enterprise identity system are about connecting pipelines, connecting systems that aren't organically connected. It's not easy for enterprises to do that; they don't do it organically, they have to make that decision. We did not spell out how to do it, and left a lot of flexibility, but there are certainly things where you can see close parallels. Your device signal can be very flexible, and I don't want to accidentally constrain agencies' thinking on how they do this. But even stuff like, is this device patched? Those are things that are meaningful to an agency and to its fleet. And in fact, you can see that in some of these VPN-based solutions we've talked about, where you can watch the client app try to do some ascertaining of the local environment. We're talking about connecting information together in a fancier way, but some of the concepts under the hood don't have to be that sophisticated.
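[Editor's note: to make "the concepts under the hood don't have to be that sophisticated" concrete, here is a hypothetical sketch of a device signal gating an access decision. Nothing here is prescribed by the memo; the `DeviceSignal` schema and thresholds are purely illustrative.]

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical device signals an enterprise SSO might consume; the memo
# does not prescribe any particular schema.
@dataclass
class DeviceSignal:
    enrolled: bool      # device is known to the agency's asset inventory
    last_patched: date  # when the OS last applied updates

def device_is_healthy(sig: DeviceSignal, today: date, max_patch_age_days: int = 30) -> bool:
    # A deliberately simple posture check: enrolled, and patched recently.
    return sig.enrolled and (today - sig.last_patched) <= timedelta(days=max_patch_age_days)

def authorize(user_authenticated: bool, sig: DeviceSignal, today: date) -> bool:
    # Identity alone is not enough; the device signal gates access too.
    return user_authenticated and device_is_healthy(sig, today)
```

Even a check this simple changes the security posture: a valid credential presented from an unenrolled or stale device no longer suffices.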
David: What about EDR, endpoint detection and response? Do you think that we should be, or that we are, heading towards a world where people are running agents on government devices?
Thomas: Actually, yeah, I was reading that, and it was tricky for me, just reading it, to get a sense of whether EDR was mandated or not. There are parts where it reads like every system in all the agencies is going to be EDR'd, and then there are other parts where it sounds like it might be flexible. So yeah, that was a big question I had too.
Eric: Mm-hmm, yeah, that's fair. There are a lot of agencies that run EDR already, in different ways. And that section of the zero trust memo links over to a whole dedicated memo on endpoint detection and response that we also issued, to be responsive to the cybersecurity executive order, which calls for EDR throughout the government. So there's a whole bunch of other material; honestly, most of the material about EDR in the government right now is actually not in the zero trust memo. We talk about it some there, and it's important to understand that it is a part of what we're doing. But some of the stuff that dedicated memo talks about is: are we talking about one vendor that everybody uses, or are we talking about flexibility across agencies?
If so, how does that stuff interact? You'll see that the memo charts out a path where there is more flexibility, and I'd encourage you to dig into it. There absolutely are already agencies with fairly large deployments of different EDR tools, and the goal is to have a consistent baseline of sorts throughout the government. Part of what we also want to acknowledge in the memorandum, in the zero trust portion, especially since so much about zero trust is a least-privilege conversation about reducing attack surface and constraining impact, is that some devices or operating systems are so constrained by design that they might not support the typical big, heavy EDR presence. We're not trying to discourage system architectures that focus on reducing attack surface. That's a recurring theme in a few areas that we wanted to make sure we reinforced there as well.
Deirdre: That's nice. You could have a fleet of Chromebooks or something like that, which are devices you can fundamentally do less with. It's just a browser wrapped up with a kernel and then you're done, something like that.
Eric: That sort of thing, right? We're not trying to rule out the concept of thin clients, or constrained devices, or other innovations that might come in the future that are designed to focus a particular device down to a particular purpose, for security reasons among other things. We want to be able to take advantage of that as well.
Deirdre: Such as a capability-based OS, which is a fundamentally different model from a lot of the things we already have, including a Chromebook. It could be more powerful than a Chromebook, but fundamentally more constrained than a Linux laptop or a Mac laptop.
Eric: Right. And conversations about least privilege come up in all sorts of situations, not just with EDR, but in how monitoring and visibility work...
David: The other thing you might ask about EDR, and maybe this is getting more into the realm of the EDR recommendations that exist in other parts of the government: you could broadly classify EDR tools into two forms. There are read-only ones, and then there are ones that do what I think Thomas would refer to as RCE-as-a-service, where they will run commands for you. Both of these offer interesting security trade-offs. Is one or the other recommended, do we see a trend towards one, or is this just the realm of a different agency's memo and guidelines?
Eric: Let me try to answer that with an analogy, because I think there's a very similar conversation in network monitoring, about where you put your monitoring tools in general. I'm sure this group has a lot of thoughts about break-and-inspect and network encryption. One of the things we spend a lot of time talking about is the need to wrestle, depending on what you're doing and what architecture makes sense, with the real pros and cons of doing that, and with how visibility can be in tension with attack surface. And this, I think, may be even closer to the example you're asking about: how we monitor servers.
You can monitor a server by having a daemon that runs on it with fairly expansive privileges, so that it can do a bunch of fancy stuff and watch deeply into how your server is performing. Or you can have that server be set up to push information out into another space, and then something can be watching that space. Having that level of indirection adds complexity, but it gives you more interesting security properties: it essentially reduces the impact of the thing that's doing the monitoring being compromised, so that an adversary can, at worst, only do more monitoring. With what you're talking about in EDR, the same considerations are present.
Eric: And I think our goal is really just to have agencies make that a conscious thing that is reasoned about and done strategically, rather than something we just stumble into.
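[Editor's note: the push-only indirection Eric describes can be sketched in a few lines. This is an illustrative toy, not anything from the memo; a local queue stands in for whatever real transport (syslog forwarding, a message bus) a deployment would use.]

```python
import json
import queue

# The monitored host only pushes observations into a one-way channel and
# accepts no commands back; a local queue stands in for the real transport.
telemetry = queue.Queue()

def report(host, metric, value):
    # Runs on the monitored host with minimal privileges: it can emit
    # data, but exposes no remote-command surface to the monitoring side.
    telemetry.put(json.dumps({"host": host, "metric": metric, "value": value}))

def collect():
    # Runs on the monitoring side. Even if this side is compromised, the
    # adversary can only read more telemetry, not execute commands on hosts.
    events = []
    while not telemetry.empty():
        events.append(json.loads(telemetry.get()))
    return events
```

The design choice is the direction of the arrow: because data only flows outward, compromising the collector yields visibility, not code execution, exactly the trade-off being contrasted with an RCE-capable agent.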
Thomas: The elephant in the room on all this stuff is the VPN situation, right? If you look at the story of VPNs in the industry leading up to a couple of years ago, I think everyone would be jumping up and down applauding the OMB memo for saying that that VPN strategy is bankrupt. We can't build security based on this kind of crunchy-outside, soft-chewy-middle model; there's a million different metaphors for this, because it's been a problem for 15 years in the industry. But VPNs have gotten a lot more interesting over the last couple of years.
One big question I have about this: if you have a hyper-modern VPN situation where you get to do application-by-application access that's linked into multi-factor auth, and you could potentially do device attestation into the VPN as well, why so explicitly rule out VPNs, as opposed to ruling out the architecture?
Eric: I really don't think this memo rules out the kind of architecture you're talking about; I don't think an agency would be constrained from pursuing that. In fact, when it comes to whether folks choose to do network segmentation, or identity as the boundary, or anywhere in between, or some combination of them, that's something where the agency needs to actually come back with a plan for what they're going to do. The purpose is to credibly isolate your systems so that, on whatever layer you're doing it, there's not a hard outer shell and a chewy center. Now, there are also some things in there specifically about making a system internet-accessible that is not today, and that may not be what an agency ends up doing for every system they have.
But the capability of being able to do that, of operating a system securely to government standards while it's internet-accessible, is going to be very helpful for people. I really do think that, and I'm totally comfortable saying this publicly. There are all sorts of ways you can go about securing organizations and systems, but it's very important not to over-rely on something to the point that you have a false sense of security about what you're doing, because it seems to make a lot of the problems in front of you go away. There is a degree to which network-level security has played that role in the federal government, and that's something we want to challenge, to make sure that whatever state we arrive at, and I do think there will be a variety of states, as we explicitly left agencies that flexibility, is different from what's there now; and that if somebody breaks into whatever kind of network or hybrid system is coming, we have meaningfully constrained that impact. And that we tore ourselves up inside to do that, which is probably what it takes almost no matter what approach you take.
Thomas: That's very cool. It's my happiest possible reading of what you're going for with the OMB doc. It's pretty subtle, what you're doing with taking a host and making it internet-accessible. The really blunt-force read of that is that where we're going is every single system being internet-exposed: the Redis instance sitting behind this Rails app will need to be internet-exposed and authenticated, which doesn't make a whole lot of sense and is an easy pot shot to take at the document. But you have to go through the exercise with something, just so you've exercised the muscle. There's a Cass Sunstein nudge thing happening with that, where you're just getting people to think properly about how to do this.
I think if you look at message board conversations about the OMB memo: if you had said modern VPNs have identity-based barriers, you can do device attestation, you can do fine-grained authorization for applications, the dumb message board response would've been, no, the OMB memo says you can't use VPNs anymore. And it's pretty clear from how you respond that that's a misreading of the memo; it's a lot more subtle than that.
Eric: It is more subtle than that. And to the point we talked about earlier, the government is quite a big place, and it's very difficult to make binary statements and set binary rules for things. But it is also very important for something like this; there is a big shift, right? So for example, there is another part of the document where, as I just mentioned on network visibility, we talk about reasoning through the tension between attack surface and visibility. We specifically are not banning break-and-inspect, or requiring break-and-inspect. But because we are choosing to say something on it, and to not say that it's one thing or the other, sometimes people do read it as saying something black and white, where what it's really trying to do is make sure that the conversation around it, and the policy and operational choices people make, are more well reasoned. I realize that's very hard to get across on the internet in general, and in really any particular document, but that's the mentality we're trying to take with some of these items.
Deirdre: One thing I wanted to touch on, related to two things ago, the nudges, is the emphasis on moving to non-phishable two-factor for anyone that has to authenticate against these services. You may have answered this before, but explicitly: government people are more used to PIVs, Personal Identity Verification credentials. You stick a card into a thing, and that's how you actually have a key to authenticate with. I'm less familiar with this; is that about right?
Eric: So PIV itself is a protocol, and there's also the card form factor. PIV is a few things: it's the protocol, and it's also related to the background investigation process that goes into ultimately issuing you that credential. So there's this hard identity token that has a whole bunch of interesting cryptographic properties, and that is also tied to something the government did, and cares about, regarding you. And the way that PIV, the protocol, functions in practice is through mutual TLS, through client certificates.
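[Editor's note: "mutual TLS" just means the server demands a certificate from the client, too. A minimal sketch of the server-side configuration, using Python's standard `ssl` module; the `ca_file` path is a placeholder, not a real deployment detail.]

```python
import ssl
from typing import Optional

def require_client_certs(ctx: ssl.SSLContext, ca_file: Optional[str] = None) -> ssl.SSLContext:
    # Configure a server-side TLS context to demand a client certificate,
    # which is how PIV authentication works on the wire. `ca_file` would
    # point at the CA that issues the client (PIV) certificates; it is a
    # placeholder here, not a real deployment path.
    if ca_file is not None:
        ctx.load_verify_locations(cafile=ca_file)
    ctx.verify_mode = ssl.CERT_REQUIRED  # handshake fails without a client cert
    return ctx
```

With `CERT_REQUIRED` set, the TLS handshake itself rejects any client that cannot present a certificate chaining to the trusted CA, so authentication happens before the application sees a single byte.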
Deirdre: Hmm.

Eric: We talked about those, and one of the main things we are emphasizing in this memo, that I've talked about quite a bit, is really recognizing that that approach is, for one, phishing-resistant. It is, and has been for a long time; it's a way in which the government was a little ahead of its time. But it also has problems in a lot of environments; there's a bit of a brittleness to it that doesn't survive in a lot of the environments that we work in. So one of the things we're really trying to do, and when it comes to MFA this is one of the more strict and specific parts of the memo, is, from a policy perspective, close a door on some older MFA methods in enterprise settings; not in public-facing settings, but in enterprise settings. But we are also really trying to explicitly open a door to protocols besides PIV for doing this authentication in the government, which is not something agencies have been told before.
Deirdre: I see. Yeah. And it explicitly says, we support things like WebAuthn and FIDO2, which is when you use your YubiKey and a browser to log into Google or other services. I think there's an id.gov that does this as well?
Eric: Do you mean login.gov?
Deirdre: Yeah, login.gov, which is cool. I logged into login.gov with my YubiKey, and then I never did anything with it again; it's nice that it exists. But for contractors and others who don't have a PIV, you can explicitly enforce a phishing-resistant two-factor method to support them logging into your application, something like that.
My question is: do you think that enforcing things like WebAuthn, and explicitly discouraging more phishable forms of two-factor, like codes you have to input into your browser, will be easier to roll out when you have a workforce that generally understands what PIV is, that you have to have a physical credential and present it? Do you think that'll be easier, given that in the general user base of internet services there's a low, less-than-1% adoption rate of things like WebAuthn, even among huge web services that support them?
I think I could see it working out both ways, in terms of: yes, people are more used to carrying tokens, but they're also very used to a specific way in which those work, which is the card format. But I think what you're talking about here is that there's a greater ability to enforce this than there is in the general public, right?
Eric: That's right. In an enterprise setting, we expect there to be the ability to purchase things for people and to train them as needed, and there's also just policy: you need to do this. That's not to say it makes it easy, because it isn't, and there are absolutely all sorts of security and usability problems; in the government, people have issues with their security team all the time when things get too tight. So I'm not saying it's easy, but in the enterprise setting, this seems like something we can get done, particularly when we have PIV to start with, which is a workable option that is deployed widely in the government. In a lot of these enterprise settings, we're talking about filling the gaps where PIV is not the right choice or cannot fill them.
Eric: I also should mention, in this context, that there are some references in the final memo to NIST and what NIST does with the derived PIV spec. Derived PIV is how you can create other tokens that are related to, and ultimately derived from, your PIV card; that has often been used in the context of being able to log into things with your phone, of blessing your phone cryptographically. They anticipate, and the memo says this, updating the derived PIV spec to accommodate other kinds of authenticators like this. So over time, you should expect a lot of this stuff to become more harmonized.
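[Editor's note: a brief aside on why WebAuthn-style authenticators are phishing-resistant. The browser writes the page's origin into the clientDataJSON that the authenticator signs, so a credential exercised on a look-alike site fails verification at the real relying party. A simplified sketch of just that one check; real WebAuthn verification also validates the challenge, signature, and more.]

```python
import json

def origin_check(client_data_json: bytes, expected_origin: str) -> bool:
    # The relying party checks that the browser-reported origin inside the
    # signed clientDataJSON matches its own origin. A phishing site cannot
    # forge this field: the browser fills it in, and the authenticator's
    # signature covers it. (Real WebAuthn verification checks much more.)
    client_data = json.loads(client_data_json)
    return client_data.get("origin") == expected_origin

# What a browser would assemble on the real site versus a look-alike:
legit = json.dumps({"type": "webauthn.get", "origin": "https://login.gov"}).encode()
phish = json.dumps({"type": "webauthn.get", "origin": "https://1ogin.gov"}).encode()
```

This origin binding is exactly what OTP codes lack: a user can be tricked into typing a code into the wrong site, but they cannot make the browser lie about where a WebAuthn assertion was made.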
David: Another question about the nitty-gritty details of the memo. I noticed on the first page there was a footnote that said "agency" is defined as in this other memo. I was curious if that's in all government memos, in the same way that "the words must, should, shall, may are defined in RFC 2119" is at the beginning of every RFC. Is this the government starter language?
Eric: I mean, you absolutely will find people referring to other canonical definitions in law, circulars, and other memos. That is a common feature of government documents like this.
David: Okay. So we've been all over in terms of topics within the memo. Maybe we've been a little combative, unintentionally; I want to be clear that I think this memo is really great, that this is a really positive direction to be going in, and I think a lot of other people think that as well. So, for people that want to be a part of this: can you pitch them on working for the government, or on where in the government they should go if they want to work on this type of thing, and how to get started if someone wants to do technology work in the government?
Eric: Yeah, absolutely. And look, for one, I don't feel this has been combative at all. I really appreciate you digging into this memo; a lot of folks out in the hard tech and security space do not read government PDFs about cybersecurity policies, or say the word cybersecurity at all. So I really appreciate that you did that and invited me on to talk about it. In general, there are many places where one can come into government as an engineer, a security professional, a technologist, a designer, a user experience person, a content writer, anything where you have a craft of wanting to have things delivered better for people. There are, at this point, a very healthy number of places around the government that are ready to help you do good work, and to give you a place where the technology is good, or at least reasonable, and where you're around other people like yourself, who understand where technology, design, and development practices are going, and who bring those to bear in a public service context.
That wasn't as much the case a decade ago, but a lot has happened in ten years. I came into the government through a team called 18F, one-eight-F, that started up in 2014, right around the same time as the US Digital Service. They're a little bit like sister organizations, with lots of cross-pollination and collaboration between them. US Digital Service is actually a neighbor of ours in OMB, and engages on a whole bunch of different initiatives for a whole bunch of different reasons. 18F is a consultancy that operates out of the General Services Administration, and it ultimately led to a bunch of interesting services. You mentioned login.gov earlier, which was a collaboration between 18F and USDS some years ago; it has matured over time and has tens of millions of user accounts, doing WebAuthn and MFA and cool stuff.
One of the experiences that was really profound for me, as somebody who came to the government from outside, was that I had just assumed it was stultifyingly boring inside. I had also never worked for an organization larger than 50 people, so the idea of bureaucracy in and of itself, no matter where I worked, was very intimidating. And one of the things that was really affecting was that, for one, there's a much bigger community feel than you would expect. Once you're inside the government, there are communities of practice that span across all these different agencies and operate very fluidly. There's actually a surprising amount of trust at the working level, and all sorts of interesting collaborations emerge from that organically.
Eric: It's not all top-down at all. In my experience there, not just on the projects that I saw turn into things like login.gov or the US Web Design System, not just me but many people around me were able to actually work on and influence policy, and to gain an understanding of what the government is on the inside, which is of course just a bunch of people doing stuff and trying to do the best they can in a bunch of different, interesting situations. A lot of that was surprising to me, and government has come to feel like the default thing for me now; I definitely would not have expected that at all. And it doesn't even matter whether it becomes something you want to spend tons of time in. Really, any kind of experience you get in the government is, I think, profoundly eye-opening. It helps you not only read the news better, but reason about how organizations function, how human incentives work, and just how complex every single thing is under the hood; and it gives you an appreciation for what it means when you're not allowed to say no to a class of users.
That is one of the things that makes the government profoundly different from anything in the private sector: you don't get to pick and choose what segment of the market you go after, in a lot of cases. In almost every case, if you're doing something interesting in the federal government that you know about today, it's because it serves the general public, and you have a particular obligation to make sure you do that. And obviously there are all sorts of policy things that get tied in: people have expectations that the government will be excellent on privacy, excellent on civil liberties, excellent on security, excellent on usability. They also assume that these are big, powerful organizations that have everything tied up nicely in a bow and run like a well-oiled ship all the time, despite all the evidence you can see that this stuff is hard. So I really do encourage folks to get some of that experience. A lot of people have done that over the last decade, whether for six months, two years, or the whole time, and it is a really positive experience. And almost nobody doing technology in the government that I have come across wears a suit and tie, like I happen to be doing at this moment; do not be fooled.
David: So GSA and 18F are good places to actually apply? If you're just looking to do technology in government, where should you be applying?
Eric: So look at GSA: if you go to join.tts.gsa.gov, that's the technology wing of GSA, which does fantastic work. USDS is hiring. Our office hires as well. And we, and a lot of other places, also publish postings on USAJOBS, which is the government's hiring site. Some of those postings will look very intimidating and government-y, and sometimes you might need somebody to explain what they mean, but a lot of really great, interesting jobs get posted there. I would also say, at this point, if you look around, I bet somebody in your circles in the technology world has done work in or around the government; you probably don't have to make too many hops to find somebody who can point you to something and help explain what's going on. I'll also make a brief plug for the Consumer Financial Protection Bureau, in part because they started a decade ago and really prioritized technology upfront, in a way that was very impressive; people don't always expect regulators to prioritize that in the same way. They're a good living example that you can. So those are a few places, but there's good work all over.
David: Awesome. Thank you for coming on our silly little podcast. To all the listeners out there, we encourage you to go to USAJOBS and find a way to work directly for Eric. And then, once you finish doing that, you're going to want to go to merch.securitycryptographywhatever.com and buy a mug.
Deirdre: Or a t-shirt or a sticker, with your hard-earned government money.
Thomas: Eric, thank you so much for doing this. This was super interesting.
Deirdre: Yes, thank you so much.
Eric: Thanks so much for having me. This was great.