yojimbo

Thoughts that should have a longer lifetime than my Mastodon posts ...

Purplecon 2019 brendan shaklovitz, face your fearful foes to dodge a dark and dreary phishy fate, https://purplecon.nz/talks#brendan-shaklovitz, https://www.youtube.com/watch?v=yNEnlcTgfnQ&list=PLS45xFo74VF546tbfXXtKDO03cVrAalM6&index=11&t=0s

after a short stint with the malicious masterminds of our red team, i've seen the terrifying tactics that real attackers could use against you. it's dirty, underhanded, and quite brilliant, and it's only fair that we level the playing field a bit by sharing some of our secrets. in this talk we'll skip past basic tech-support scams and talk about lovingly hand-crafted “spear phishing” campaigns specifically targeting individuals based on publicly available information. who knew your gaming habits would be your downfall? and finally we'll talk about some things you can do to really ruin a fledgeling evil mastermind's day, and repurposing some strategies learned from a career in site reliability engineering to help create a psychologically safe environment where people aren't afraid to tell you when they make mistakes.

  • (Atlassian SREs rotate into red/blue teams)
  • Recon via social media (LinkedIn profiles, photos) can reveal far more than you'd expect, including the hardware/software a target uses
  • osquery is good at endpoint monitoring
  • build a security culture
    • give rewards – even just stickers, but also internal store 'credits' if applicable, etc
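
A minimal sketch (mine, not from the talk) of driving osquery from Python by shelling out to `osqueryi --json`; `processes` and `listening_ports` are real osquery tables, but the wrapper and the example query are illustrative assumptions:

```python
import json
import subprocess

def osquery_cmd(sql: str) -> list[str]:
    """Build the argv for a one-shot osqueryi invocation with JSON output."""
    return ["osqueryi", "--json", sql]

def run_osquery(sql: str) -> list[dict]:
    """Run a query against the local osquery shell and parse the JSON rows."""
    out = subprocess.run(osquery_cmd(sql), capture_output=True,
                         check=True, text=True)
    return json.loads(out.stdout)

# a classic hunting query: which processes are listening on the network?
LISTENERS = ("SELECT p.name, lp.port FROM listening_ports lp "
             "JOIN processes p USING (pid) WHERE lp.port != 0;")
# rows = run_osquery(LISTENERS)   # needs osqueryi installed locally
```

In a real deployment you'd run the osqueryd daemon with scheduled query packs rather than one-shot shells, but the SQL is the same.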

Purplecon 2019 tom eastman, protecting people from social media harassment, https://purplecon.nz/talks#tom-eastman, https://www.youtube.com/watch?v=b1bTKHdvGjo&list=PLS45xFo74VF546tbfXXtKDO03cVrAalM6&index=16&t=0s

in some ways, twitter seems like it was designed from the ground up to be the perfect tool for harassment. twitter’s own mechanisms that are supposed to protect users sometimes seem to be pretty inadequate to the task. so i decided to make a few of my own. along the way, i got to grapple with some interesting challenges, including and especially how to build a tool safe enough for use by people who have been threatened online. in this talk i explore risks you have to consider, how you mitigate them, and the ethics of the decisions you end up making.

  • Tom wrote Secateur, which tries to restrict dogpiling on twitter by blocking a blocked user's followers, but only for a limited time
    • Therefore the app has to hold an OAuth token on behalf of its user
    • Therefore it must be open source and able to be run by the user, because why should they trust Tom while they're being attacked?
    • threat model the app server assuming the dogpilers will attack it as well
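
The time-limited mass-block idea can be sketched as pure logic (my sketch, not Secateur's actual code; the `client` object with `followers`/`block`/`unblock` is a hypothetical stand-in for the Twitter API):

```python
import time
from dataclasses import dataclass, field

@dataclass
class TimedBlocklist:
    """Track blocks that should be lifted after `ttl` seconds."""
    ttl: float
    expiry: dict[str, float] = field(default_factory=dict)

    def block_followers(self, client, target: str, now: float) -> list[str]:
        """Block everyone following `target`; remember when to unblock them."""
        blocked = []
        for follower in client.followers(target):
            client.block(follower)
            self.expiry[follower] = now + self.ttl
            blocked.append(follower)
        return blocked

    def lift_expired(self, client, now: float) -> list[str]:
        """Unblock anyone whose temporary block has run out."""
        done = [u for u, t in self.expiry.items() if t <= now]
        for u in done:
            client.unblock(u)
            del self.expiry[u]
        return done
```

The expiry map is the interesting part: the service must persist it and keep polling `lift_expired`, which is why the app has to hold tokens long-term rather than act once.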

Purplecon 2019 helen, an introduction to ghidra, https://purplecon.nz/talks#helen

so the nsa made its internal reverse engineering toolkit open source in early 2019, which means everyone now has access to a thing for free. sure... it has dark mode. a five minute overview on getting started for the overwhelmed and/or the lazy.

  • Shared projects are hosted on a server (there are public non-NSA ones) to share a job between multiple researchers
    • Decompiler + function graphs looks like IDA
    • Has an API for automating goodness
    • Had no backdoors, guaranteed!

Purplecon 2019 kirk, incident response drills: how to play games and get good, https://purplecon.nz/talks#kirk, https://www.youtube.com/watch?v=-8BeooqxuAo&list=PLS45xFo74VF546tbfXXtKDO03cVrAalM6&index=7&t=0s

computers are exceptionally good at taking instructions and making very fast, very precise mistakes very reliably. humans are conceptually similar but interpret their inputs and decide on courses of action based on experience. preparation and rehearsal for messy, no-notice events that are definitely (hopefully) not business as usual makes us more chill for when something (production) does go down due to novel (gremlins) issues. incident responders should practice for sensitive and time-critical events before they happen so they are able to return things to a safe and stable state with grace and aplomb. this talk is for team leaders or security program owners interested in the craft of using incident response exercises to develop their people. we will learn how these synthetic experiences can be devised against specific environments and standards with measurable outcomes. finally we will cover ways to easily scale difficulty and iteratively improve your exercise program.

  • How to get from a bad place to a good place
    • think about it in advance, perhaps?
    • like fire drills, earthquake drills etc
  • practice, because real events add real stress on top of the job
  • Check Maslow's learning hierarchy
  • Telling war stories explicitly passes knowledge on to juniors
  • mentoring/coaching is a better structure though
  • making exercises “engaging” helps
    • “Zombie Preparedness” programme worked
  • Being an RPG DM is great preparation
    • qv Cathy/TradeMe's similar RPG talk

Purplecon 2019 mikala easte, risk management without slowing down, https://purplecon.nz/talks#mikala-easte, https://www.youtube.com/watch?v=2S6acN_QY_Y&list=PLS45xFo74VF546tbfXXtKDO03cVrAalM6&index=13&t=0s

most organisations start out relying on people and their expertise when making decisions, but this doesn't scale well and leads to bottlenecks and pain. larger corporates rely on processes, controls and systems, but these can overwhelm smaller companies. i'd like to share some thoughts on how to set up lightweight risk management processes to empower teams to make informed decisions and not just rely on what the security person thinks of it.

  • halfassing the job is better than not doing it
  • just keep a risk register, write everything in it
  • review it – even accepted risks, because the world outside might have changed
  • involve multiple people in the risk-setting process
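
A lightweight risk register really can be this small; a sketch of the "review even accepted risks" point (field names and statuses are my assumptions, not from the talk):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Risk:
    title: str
    status: str       # e.g. "open", "accepted", "mitigated"
    review_by: date   # even accepted risks get a review date

def due_for_review(register: list[Risk], today: date) -> list[Risk]:
    """Accepted risks are not exempt: the world outside may have changed."""
    return [r for r in register if r.review_by <= today]
```

A spreadsheet with the same three columns does the job just as well; the point is that acceptance is a dated decision, not a permanent one.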

Purplecon 2019 ben dechrai, to identity and beyond!, https://purplecon.nz/talks#ben-dechrai, https://www.youtube.com/watch?v=_5FT5DAMVY4&list=PLS45xFo74VF546tbfXXtKDO03cVrAalM6&index=8&t=0s

it's unusual to develop applications that have no identity requirements nowadays. whether it's securing access to resources, synchronising data between devices, or providing a customised experience, any new project will soon need that login form. while you might start out with a simple login form and a backend user directory, these soon grow into their own beasts, when requirements call for multi-factor authentication, or machine-to-machine authorisation functionality. these requirements and associated maintenance costs are often at odds with the desire to focus on building new features that actually bring your users value, or fixing bugs that currently bring them pain. in this talk, you will learn about oauth, openid connect, and json web tokens; where they came from, how they work, and how they can simplify your projects, from single-page apps to the apis that drive them, and everything in between.

  • Auth0 – use OpenID Connect!
  • (what about webauthn?)
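
To make the "json web tokens" part concrete, a hand-rolled HS256 sign/verify in the stdlib, showing the `header.payload.signature` base64url structure (a teaching sketch only; in production use a maintained library like PyJWT, and prefer the asymmetric algorithms your OIDC provider signs with):

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    """Base64url without padding, as JWTs use."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_hs256(payload: dict, key: bytes) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = b64url(hmac.new(key, f"{header}.{body}".encode(),
                          hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify_hs256(token: str, key: bytes) -> dict:
    header, body, sig = token.split(".")
    expect = b64url(hmac.new(key, f"{header}.{body}".encode(),
                             hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expect):  # constant-time compare
        raise ValueError("bad signature")
    pad = "=" * (-len(body) % 4)
    return json.loads(base64.urlsafe_b64decode(body + pad))
```

Note the payload is only *signed*, not encrypted: anyone can base64-decode it, so never put secrets in claims.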

Purplecon 2019 bl3ep, a novice red teamer's guide to self help, https://purplecon.nz/talks#bl3ep, not streamed

advice and learnings from a newbie's first year: how to get better hacking yourself, hacking others, and defence against the se arts.

  • mh – stress? say “I'm excited” and hack your own response system
  • mh – visualise the stressful activity in details first, going well
  • social engineering attacks are a good exercise for general non-IT staff
    • Validate the request. Call back. Don't “reply”. Don't leak PII by being helpful. Use a different channel for validation

Purplecon 2019 james, deploying kubernetes safer(ish), https://purplecon.nz/talks#james, https://www.youtube.com/watch?v=1aBIFsqpBVU&list=PLS45xFo74VF546tbfXXtKDO03cVrAalM6&index=12&t=0s

sometimes evil conglomerates, large companies and/or totally regular and normal individuals prefer to run kubernetes themselves, instead of using a public cloud provider – perhaps they don't trust the intergoogles, perhaps they want to experience the incessant joys of maintenance and upgrades themselves, or perhaps (the real reason) they wanted to justify their sweet, sweet devops stickers on their laptop. sure, not trusting someone else's computer makes sense in some threat models, but the (sometimes overly-enthusiastic) diy approach does mean they open themselves up to a whole host of other problems – google probably does know how to deploy, manage and secure kubernetes better than anyone else, since they kinda built it. they've probably even got better stickers. unfortunately, setting it up is hard. there's so many moving parts and the vaguely dodgy how-to posts on random blogs always seem to be a few versions behind – and they feel like they get away with it by saying “definitely probably don't do this in production, but it's totally fine to do for testing, what's the worst that could happen?*” this talk will take you through some of the parts of the kubernetes setup that are commonly ignored (“oh yeah we’ll definitely 100% get to that later”), or excluded from scripts you piped from curl to bash, or are pretty easy to accidentally get wrong if you didn’t know about this other thing that wasn’t made immediately obvious. if you’re an auditor, these are your super tasty critical severity fairy-bread tickets. if you’re a defender, these are the things that differentiate your totally awesome cluster of orchestrated hotness from a totally awesome cluster of orchestrated hot mess. if you’re an attacker who’s popped a shell and found themselves trapped in a container of emotions, these are the things that make you have a big sad when they’re done right.

  • How did you install it? (curl | sudo bash ???)
  • localhost api bypasses all auth* checks by default
    • Don't let containers talk to your localhost api!
  • did you give your container a token to talk to k8s? how is that secure??
  • etcd is supposed to run on multiple nodes; a single-node setup can be DoS'd
  • etcd setups rarely use auth* but should
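
The "localhost api" note refers to the kube-apiserver's legacy insecure port (historically 8080 on localhost), which skipped authentication and authorization entirely; it has since been removed from Kubernetes. A sketch of probing for it from inside a host (the host/port defaults and the probe itself are my assumptions):

```python
import json
import urllib.request
from urllib.error import URLError

def insecure_port_open(host: str = "127.0.0.1", port: int = 8080) -> bool:
    """Return True if an unauthenticated API server answers on the legacy
    insecure port -- no token, no client cert, full cluster access."""
    try:
        url = f"http://{host}:{port}/version"
        with urllib.request.urlopen(url, timeout=2) as resp:
            # the apiserver's /version endpoint reports gitVersion
            return "gitVersion" in json.load(resp)
    except (URLError, ValueError, OSError):
        return False
```

This is exactly why containers that can reach the node's localhost get a free escalation path on old or misconfigured clusters.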

Purplecon 2019 moss, choose your own adventure: password reset, https://purplecon.nz/talks#moss, https://www.youtube.com/watch?v=-gpfKW_8EJw&list=PLS45xFo74VF546tbfXXtKDO03cVrAalM6&index=5

you build or are part of a team that has a thing on the web that does stuff for people. and those people would appreciate it if other people couldn't pretend to be them on your website doing their secret squirrel stuff. so, you decide to have people log in with a password. it'd be mighty nice of you to give people a way to recover their accounts when they misplace their passwords. password reset flows are a choose your own adventure where the players just want to be able to secret squirrel again, and if you're in charge of one let's learn about some game overs everyone would like to avoid.

  • Lifecycle of password reset, from the perspective of the reset token, might reveal new ways to think
  • To understand the lifecycle, consider expiry, repeated requests, token reuse (especially by corporate email gateways that click every link), and the success and failure states.
  • But don't let attackers discover your PRNG seed
  • May be a good place to use a state machine :-)
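
Sketching that state machine from the token's perspective (my own sketch, not the talk's code): `secrets` rather than a seedable PRNG covers the seed-discovery point, and consuming the token only on the password-setting POST, never on a GET, covers the link-clicking mail gateway:

```python
import secrets

class ResetToken:
    TTL = 15 * 60  # seconds; a short expiry limits the attack window

    def __init__(self, now: float):
        # secrets, not random: no discoverable PRNG seed
        self.value = secrets.token_urlsafe(32)
        self.expires = now + self.TTL
        self.state = "issued"   # issued -> used | expired

    def redeem(self, presented: str, now: float) -> bool:
        """Consume the token exactly once, on the password-setting POST --
        a GET from a link-scanning mail gateway must never burn it."""
        if self.state != "issued" or now >= self.expires:
            if self.state == "issued":
                self.state = "expired"
            return False
        if not secrets.compare_digest(presented, self.value):
            return False
        self.state = "used"
        return True
```

A fresh reset request should also invalidate any earlier outstanding tokens for the account, so only one edge into "used" exists at a time.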

Purplecon 2019 anton black, against lies, h*cking lies, https://purplecon.nz/talks#anton-black, https://www.youtube.com/watch?v=3FLwN7OJAjQ&list=PLS45xFo74VF546tbfXXtKDO03cVrAalM6&index=8

did you know that the more blue teamers are sent to handle a security incident, the worse that incident will be? using science and statistics to make decisions about how you run security is a great idea – 𝘪𝘧 you can interpret and represent your data accurately. but statistics is rife with potential pitfalls that can lead you to all kinds of false conclusions. with some help from planet earth's own blue team, we'll learn how to recognize and work around these problems to not only use your own data for good, but to also catch flawed analyses when you see them around you.

  • Identify confounding variables/assumptions
  • Observational studies can't identify causes, and “all” infosec studies are observational (therefore incomplete science)
  • Interventional studies are too expensive, or ethically questionable
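
The abstract's "more blue teamers, worse incidents" line is a confounder in action, and it's easy to simulate (toy numbers, entirely made up): severity drives both the responder head-count and the damage, so an observational correlation shows responders "causing" damage even though in the model they strictly reduce it:

```python
import random

random.seed(42)

incidents = []
for _ in range(1000):
    severity = random.uniform(0, 10)           # the confounder
    responders = max(1, round(severity))       # severe incidents get more people
    # responders actually *reduce* damage, but severity dominates
    damage = severity * 10 - responders * 2 + random.gauss(0, 1)
    incidents.append((responders, damage))

def corr(pairs):
    """Pearson correlation of a list of (x, y) pairs."""
    xs, ys = zip(*pairs)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# observationally, responders and damage are strongly positively correlated,
# even though each responder lowers damage by construction
print(corr(incidents))
```

Controlling for severity (e.g. correlating within a narrow severity band) makes the apparent effect shrink or flip, which is the whole point about confounders.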