Presented at O’Reilly OSCON.
Two years ago, my colleague Craig Jackson and I were making our way home from a conference. For the second year in a row, we'd presented a workshop on building cybersecurity programs for scientific research projects, with an emphasis on large facilities, and that year I presented a little addendum, Securing Novel Technologies. The thing about science is that sometimes there is no best-practices guide. We're often asked to secure things that don't exist anywhere else. My addendum offered a glimpse at how I come up with controls for the weird stuff.
I’m that hacker. Giant telescope on top of a volcano? No best practices guide for that. Just send Susan; she’ll figure out how to secure it. SCADA under the Antarctic ice? Got it. The military called, but they can’t tell us what they called about just yet. Something’s trying to blow up the internet, and we need a strategy…
We have lots of people out in the field following best-practices guides and applying controls from big lists. What we lack is enough security operatives who think and work around the edges of what we don't understand well, whether because it's too new, too unusual, or simply unfamiliar; memorizing every piece of tech in the world is impossible.
The experienced among us reason from first principles, but we tend not to teach that way…until now. Using the seven information security practice principles developed with my team at IU CACR, I'll introduce a mental model for reasoning about security rather than memorizing your way to it, and demonstrate its application to real-world examples. You'll leave looking at the technologies and human systems around you a little differently.