> Web programmer? Don’t make account IDs easy to guess. In an otherwise secure system, account numbers that go 1,2,3… shouldn’t be a problem, but why make it easy?
Is that the best advice to web programmers they can give based on this story? That's the "obscurity" part of a security-by-obscurity scheme. If you've got your security otherwise nailed down, some obscurity on top doesn't hurt: defense in depth, people call that. But rely on obscurity alone, and it only takes one person figuring out how your scheme works before it's game over.
I'd recommend thinking about authentication instead. Your authentication state is not "logged in", it's "logged in as user X". So the code that decides whether a client can see a specific page can and should (!) depend on who exactly you're authenticated as.
Oh, and yes, this company has proved that they don't know the first thing about security. But that was clear already.
> Your authentication state is not "logged in", it's "logged in as user X". So the code that decides whether a client can see a specific page can and should (!) depend on who exactly you're authenticated as.
"the code that decides whether a client can see a specific page" should not care about authentication, this is authorization issue. I see these things conflated too often.
Identification, authentication, and authorization are three different beasts. Separate identification seems like an unnecessary complication, and in a simple web app it is, but in more complicated cases where the ID is not user-supplied (login via an external service, reading an ID from a smart card, etc.) it can become a necessity, one that can be embedded in the authentication mechanism. The authentication mechanism should only provide an "authenticated" flag and the like (e.g. an "authentication security level"), because authentication at its root is a mechanism for establishing trust that the client controls a certain ID, nothing more.
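To make the distinction concrete, here's a minimal sketch of the separation: one function answers "who is this?" (authentication), a different one answers "may this user do this thing?" (authorization). All names here (`sessions`, `locks`, the token values) are illustrative, not from any real API.

```python
sessions = {"token-abc": "alice"}             # token -> authenticated user ID
locks = {"lock-1": "alice", "lock-2": "bob"}  # lock -> owner

def authenticate(token):
    """Authentication: establish which user controls this session, or None."""
    return sessions.get(token)

def can_open(user, lock_id):
    """Authorization: a separate decision based on who the user actually is."""
    return locks.get(lock_id) == user

def open_lock(token, lock_id):
    user = authenticate(token)
    if user is None:
        return "401 Unauthorized"   # not logged in at all
    if not can_open(user, lock_id):
        return "403 Forbidden"      # logged in, but not as the owner
    return "200 OK, unlocking"
```

The Tapplock bug described in the article is exactly what happens when the second check is missing: any `authenticate`-passing token can open any lock.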
Have you looked at recent auth tutorials for Node.js and related frameworks?
You (a potential beginner) have to basically implement everything yourself or use some 3rd party service. There just aren't any good libraries available.
I once tried some random framework I found where the example auth project that they provided had exactly the problem you described.
You could log in and then do everything if you called the api endpoints manually... Deleting and editing other users profiles? No problem.
Of course your authentication scheme should be secure even if someone knows how the user IDs are generated. That said, it might be a good idea to expose something like GUIDs to the user anyway, for example if you are worried about a competitor getting an exact count of registered users / products / orders (which they could get just by performing the action themselves and observing the created ID).
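One way to do this, sketched below: keep a sequential primary key internally, but hand out a random UUID as the only identifier clients ever see. The `users` dict stands in for a database table; the names are illustrative.

```python
import uuid

users = {}  # public UUID -> record with the internal sequential ID

def create_user(name):
    internal_id = len(users) + 1        # sequential, stays server-side
    public_id = str(uuid.uuid4())       # random, safe to put in URLs
    users[public_id] = {"id": internal_id, "name": name}
    return public_id

pid = create_user("alice")
# A second signup reveals nothing: the two public IDs are unrelated
# random values, so an outsider can't infer signup volume from them.
```
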
“Security by obscurity” has become so misused... it doesn’t apply to this situation at all. There’s nothing illegitimate about having a secret token for your user ID. By the same logic, a password is false security.
“Security by obscurity” refers to the obscurity of the security scheme, not identifiers!
> Tapplock user? Get and install any and all patches provided. Apparently, the company has now addressed the most obvious web portal holes (guessable account IDs and no HTTPS), but we assume an app update will be needed as well.
It sounds like they still have this flaw, you just have to guess someone else's account ID now.
In a different market, with a different product, this could have been a funny success story 10 years later.
Something like:
> "when we launched, our [thing] was totally insecure and we had just thrown together a bunch of spaghetti code over nights and weekends- anything that would ship. Then when we hit it big we started investing in the process and now our [thing] is the best, most secure one on the market"
Too bad in this case [thing] is a lock, where proper security is its primary, indeed its only, reason for existing. There's no 10 years later for this one.
While true in many cases, in my experience it's not only the engineers. Unless you have the senior managers on board as well, they'll just lean on the engineers to "ship now".
While EEs are partly to blame, the other half is freshly graduated (or "bootcamp graduated", let's put it that way) developers who know all the Node.js shortcuts but don't know the fundamentals of internet security.
But you probably knew that it was something to consider. And you weren't told that you're basically a full-fledged developer after a 3-month web dev course and reading "Cracking the Coding Interview". The Dunning–Kruger effect is heavily at work there.
I'm not the person you responded to, but not really. My college courses - admittedly as a Computer Science major, not Software Engineering - were laughably lacking any practical education. The closest we had was one group project, and we didn't even learn about things like version control.
Security? Forget it.
I've never gone to a bootcamp, but I'd wager they at least mention security in their classes. College was the equivalent of the old PHP documentation, full of incredibly insecure examples that people "learned" from.
That’s wrong. An engineer has a couple of managers above them: product, project, platform manager, you name it. An engineer is just another blue-collar worker nowadays, with no decision-making power. I wasn't allowed to save the company $40k; they told me the numbers shouldn't interest me. If management says we need no encryption and no authentication, that's apparently totally OK. I just print their emails with that statement for later.
Engineering is absolutely not blue collar, but this has nothing to do with the issue you’re facing. If you found an opportunity to save $40k and they said you shouldn’t care about numbers, then either you have poor management or there’s more to the story that you didn’t share. At the very least competent management should have been able to explain why saving the $40k was the wrong trade off.
Yes, competent management. I haven't experienced many companies that have it. With competent management I could have saved a couple of years of my salary in this project alone. The client is another branch of the same company. I could design a PCB with components for our needs; instead, management wants to go with an expensive 3rd-party module that has all the bells and whistles on it.
Edit: PCB design is risky, but the system is very primitive, just a voltage regulator and a single integrated circuit. I doubt this department, at its current performance, would survive as an independent company.
Maybe in garage projects, though there it could just as well go the opposite way. I'd be really surprised to see it in a corporate setup or even with smaller design contractors.
Out of interest what strategies could be implemented to avoid this?
I've worked as a developer for a number of companies that handle sensitive data, and I could fairly easily have pushed malicious code. Even with mandatory code reviews, a well-placed security hole in significantly complicated code would likely be missed.
I work on encrypting movies for distribution, so there's a spectrum of attacks we try to consider.
1. Beginner opening Firebug / Devtools - use UUIDs for all the things, don't make anything guessable. Scrypt/bcrypt passwords on a separate Oauth system for all passwords / logins, manage all sessions with access tokens that are checked on each operation, allow immediate revocation of all open sessions.
2. Novice / Amateur attacks on the API or backend - make sure as much security as possible is in global middleware on the application, on by default, opt-out explicit. Long random access tokens, use either DB checks or constant time comparison to verify, do rounds of self and third party testing to make sure there's no cross privileges or escalation.
3. Protect against ourselves going rogue / being compromised - all keys in a HSM, the HSM also has the list of certificate chains it will trust, only modifiable by N of M keycards held by top execs, assume compromise at every stage and design the systems to mitigate impact. Ideally design so catastrophic attacks are impossible without collusion among top level company execs, and all other attacks, even when successful are downgraded to inconveniences, preferably minor.
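Two of the points in the list above (password hashing with scrypt, and constant-time token comparison) can be sketched in a few lines. The parameter choices below are illustrative, not a vetted production configuration:

```python
import hashlib
import hmac
import os
import secrets

def hash_password(password, salt=None):
    """Hash a password with scrypt; returns (salt, digest)."""
    salt = salt or os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password, salt, digest):
    _, candidate = hash_password(password, salt)
    # compare_digest runs in constant time, so the comparison itself
    # doesn't leak where the first mismatching byte is.
    return hmac.compare_digest(candidate, digest)

def verify_token(presented, stored):
    """Constant-time comparison for long random access tokens."""
    return hmac.compare_digest(presented, stored)

token = secrets.token_urlsafe(32)  # long random access token
```

A naive `presented == stored` string comparison can, in principle, be attacked by timing; `hmac.compare_digest` is the stdlib's answer to that.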
> 1. Beginner opening Firebug / Devtools - use UUIDs for all the things, don't make anything guessable.
Is that actually necessary/useful?
> Scrypt/bcrypt passwords on a separate Oauth system for all passwords / logins, manage all sessions with access tokens that are checked on each operation, allow immediate revocation of all open sessions.
why Oauth instead of just a regular user table, login POST form, and the rest as you describe?
In this case OAuth is used more because we have multiple products; we might have done without it for just one, but it somehow makes more sense to have a separate system with isolated human data anyway, what with GDPR and all. The actual application has only the user UUIDs and can null or pseudonymise the references as necessary. This is B2B, so slightly different rules apply for auditing etc.
And yeah, obscurity is not security, but it is useful in itself: a smaller attack surface (can’t leak info by iteration) and less competitive info leakage (you’ll never figure out how many customers we have unless we tell you). It also has advantages with regard to database replication, backup restores, and sequence management, but that’s another topic.
They should have paid for code audits, penetration testing, and the whole nine yards before putting anything in a customer's hands. The whole point of this company is a physical security device with online controls; not doing any of that is downright folly in my opinion.
I am fairly certain that the lean methodology of "if you are not embarrassed by your first release, you released too late" may have been a factor.
Being a Kickstarter project, the pressure to meet production deadlines is severe, and any technical debt incurred by the 'quick and dirty' MVP probably stays unpaid.
Silicon Valley prefers teams to have epic backstories, not experience.
A first-year dropout from MIT and a Thiel fellow are much more likely to get funded than an engineer with a state uni BSc+25 years industry experience plus a former sales manager in the same industry.
I'm just guessing, but I think experience would actually get you more investment. But interesting backstory gets you more press. Thus the impression that backstory is more important. Most funded startup founders are older and experienced. They just don't get as much tech press coverage.
Kind of like why many people fear sending their kids to school because they might be killed by a mass shooter. When in reality, the drive to school is much more dangerous.
This company is from Canada, not Silicon Valley, and has nothing to do with MIT or the Thiel fellowship. Please avoid polluting comment threads with this kind of off-topic sneering.
I wonder why the "message" field in the response says "API调用成功" ("API call succeeded", I think?) if this is a Canadian company. Did they just buy the locking solution from some Chinese OEM?
Yes, almost certainly. One of the IoT industry's dirty little secrets is that just about everyone is just rebadging OEM hardware from China, and often doing a minimum of due diligence on that hardware.
(Disclaimer: I work for an IoT startup. We have an in-house security engineer, and contract pen testers who we call in to do physical and software tests against any new hardware we ship)
Some of the off-brand OEM locks are worse. Like, there's one model which apparently just has exposed Torx screws that can be undone to take the entire lock apart: https://www.youtube.com/watch?v=7Uje4pxfSlI
This is a problem you see with a security model where the front end is locked down (meaning the user can only see the bits they should have access to in the UI) but the back-end API is pretty much open to any authenticated user. The idea being that nobody should be able to send API requests if the UI isn't there.
It is a stupid practice.
I "hacked" a student newspaper back when I was at university with a similar "hack". They decided to roll their own CMS rather than using something like Wordpress, because, you know... that makes sense for a small team with little experience.
The user settings page was something like /user/edit/{userid}. I noticed that you could actually change any user's settings (including login) just by changing the userid. So, of course, you change it to 1, because the first user will inevitably be the admin. This gave control over the whole system.
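The flaw described above is what's now usually called an insecure direct object reference (IDOR). The missing line is a comparison between the session's user and the ID in the URL. A framework-agnostic sketch (`session`, `save_settings`, and `settings_db` are hypothetical names, not from any real CMS):

```python
settings_db = {}  # userid -> settings, standing in for the CMS database

def save_settings(userid, new_settings):
    settings_db[userid] = new_settings

def edit_user(session, userid, new_settings):
    """Handler for /user/edit/{userid} with the ownership check in place."""
    if session.get("user_id") is None:
        return "401"  # not authenticated at all
    # The check the student paper's CMS was missing: being logged in as
    # *someone* is not enough; you must be this user, or an admin.
    if session["user_id"] != userid and not session.get("is_admin"):
        return "403"
    save_settings(userid, new_settings)
    return "200"
```
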
>Tapplock user? Get and install any and all patches provided. Apparently, the company has now addressed the most obvious web portal holes (guessable account IDs and no HTTPS), but we assume an app update will be needed as well.
I thought this was a blog post about the guy who simply unlocked the padlock using a GoPro mount (https://www.youtube.com/watch?v=RxM55DNS9CE - the video is worth watching from the beginning) but this was more amusing than I was expecting.
Apparently that was a quality-control issue with his lock - the lock is designed to have a small metal pin that prevents rotating the back, but it was defective on his lock.
Same quality control that allows the shackle to be snipped with a 12" set of bolt cutters? It's almost as if at every turn, of two possible decisions, they consistently chose the wrong one.
This may seem unrelated but I watched "The Disaster Artist" for the first time last night. It made me cringe, not for Wiseau but by reminding me of all the times I've been a Wiseau in my life as I disconnected from reality caught up in some fantasy of how I was going to make the world love me by something I was going to do. Reality can be a brutal place for the ego, but at least it's real.
"I'm going to make my own Bluetooth smart-lock. It's gonna be amaaaazing. Oh hai Mark."
> You could easily sniff out account IDs because Tapplock was too lazy to use HTTPS.
SSL's benefits are generally over-hyped IMO, and it can give a false sense of being 'secure', as in this article, where such an obviously flawed system receives "use SSL" as one of two recommendations.
The idea that unencrypted traffic allows any hacker to easily sniff it is wrong and misleading. The eavesdropper needs to be "close": on the same LAN as the target or upstream of it, i.e. on the same wifi (which requires being physically there, knowing or cracking the wifi password, and performing an ARP spoofing attack), or being/hacking the ISP itself.
Of course I'm not saying SSL shouldn't be used, only that it's a secondary security measure, like wearing a seat belt vs having good brakes.
Almost all flaws are "not that serious" on their own, because people aren't generally _that_ dumb.
"You can find out somebody's account ID" isn't that big a problem in the presence of other decent mitigations. Without those mitigations, of which HTTPS is one to prevent request spoofing, everything is terrible.
> ”Incredibly, Tapplock’s back-end system would not only let him open other people’s locks using the official app, but also tell him where to find the locks he could now open!”
Never heard of this product before, but what a hilarious read. They do seem to have fixed some of the issues pretty quickly. But what a nightmare IoT is. I’m stressed out just keeping up with computer/phone updates (mostly because I wait a bit to ensure the programs I use still work). I can’t imagine owning even more products whose software I have to keep updated...
The article recommends:
> Don’t allow plain HTTP any more. Make sure your servers insist upon HTTPS connections, and update your client software to use HTTPS exclusively.
Does this mean I should turn off HTTP completely? Right now, I redirect any incoming HTTP requests to HTTPS. Is this considered insecure?
So it depends on your client population. If you've got general browsers hitting your site, I think HTTP-to-HTTPS redirection is a reasonable trade-off; it avoids the poor experience of a user typing your site name into the address bar and being told there's no site there.
If you control the client population, as in this case, there's no real need to allow HTTP at all, so I'd just disable it and have HTTPS only.
Using HTTP to redirect to HTTPS is fine, but you should augment it with HSTS (to instruct browsers to always go to HTTPS) and HSTS preload (to let browsers know to go for HTTPS even before they visit your site the first time).
It's insecure because whoever intercepts the unencrypted HTTP request could redirect it anywhere, i.e. to their evil site instead of your secure site.
But if you set the Strict-Transport-Security header[1] that will tell the client to go directly to HTTPS in the future, without needing to be redirected, so future requests will be secure as long as the initial redirect request is not intercepted.
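To illustrate, here's a minimal sketch of attaching the HSTS header to a response's header map. The value shown (one year, subdomains included, preload-eligible) is a common choice, not the only valid one, and `add_hsts` is a made-up helper name:

```python
def add_hsts(headers):
    """Attach a Strict-Transport-Security header to a response header dict."""
    headers["Strict-Transport-Security"] = (
        "max-age=31536000; includeSubDomains; preload"
    )
    return headers

resp_headers = add_hsts({"Content-Type": "text/html"})
```

In practice you'd set this once in server config or middleware so every HTTPS response carries it; note that `preload` only takes effect once the domain is submitted to the browsers' preload list.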
Hmm? What could they intercept if you only have HTTPS (with properly checked valid certificates) other than the (useless to the attacker) encrypted data?
Preventing that kind of thing is the whole point of SSL. If you enable communication without SSL it's not prevented, so it's less secure.
I'm saying that if the client tries to connect to HTTP/80 and the attacker has interception abilities, it doesn't matter if the legitimate server has that port closed, because the request will never get there.
Ah. Sure. But if the server doesn't listen on port 80 and you document this fact, clients won't regularly attempt to connect on port 80 (since they know it can't work).
I guess this applies more to non-browser client software than to people typing URLs in their browser address field, though.
If it's an API, disabling HTTP reduces the risk someone will mistakenly code a client that uses HTTP, not notice their mistake, and later users of that client will get MITMed.
It's unlikely this will become standard practice in the near future.
The reason is, if you're using any type of hosting that shares IP addresses - Cloudflare, S3, Cloudfront - then you can't close the HTTP port as it's needed for other sites hosted on the same IP.
Getting your domain added to the HSTS preload list is a more conventional way of reducing HTTP traffic.
Utterly horrifying, but ultimately irrelevant. No lock of this size is meant to be anything other than inconvenient to open. Angle grinders are cheap, and more easily wielded than HTTP request crafting.
Also, with a simple padlock you know that it can be cut open pretty easily. So you're aware of the risks and the basic level of security it offers. This "smart" padlock offers nowhere near the basic level of security you expect so you can't make a proper risk assessment.
We had a similar thing a couple of years ago with bicycle locks (I'm Dutch). AXA, a well known manufacturer of locks offered a range of very sturdy bicycle locks. Hardened steel, almost impossible to crack open unless you spend some time with a big, noisy angle grinder. A lot of these locks were mandated by insurance companies, so they must be good, right?
Wrong. A lot of these locks could be opened in seconds with just a blank (uncut) key. Which you could buy in bulk and not very hard to get hold of...
The issue isn't that you could break most any padlock using heavy duty tools, the issue is that you can break any Tapplock without arousing suspicion.
Consider using it for your gym-locker. The changing room is typically occupied by at least some people whenever the gym is open so if someone is pulling out a bolt-cutter or an angle grinder it would probably lead to someone notifying the gym-employees/police. Meanwhile someone who just walks up to the lock and seemingly opens it must be legit and goes unnoticed.
If someone wants to get past my low-tech padlock, it will be inconvenient. They will have to spend some amount of time with either an angle grinder, bolt cutters, or lock picking tools. What they’re doing will be suspicious and will likely draw attention. If they’re caught during the trip to or away from my lock, the possession of any of these tools will be suspicious.
With this lock, all those weaknesses remain, except now the lock-picking tools are replaced with an app, and there's nothing wonky at all about someone walking up to the lock, tapping on their phone, and removing it. If they're caught en route to or away from the location, they have nothing on them to incriminate them.
This lock is strictly worse than a standard padlock, when it comes to security. All the flaws plus new ones.
Why the hell do they store the location data of the locks? Are they secretly selling the location and unlock access to governments? I can't imagine a legitimate usecase because this is not a bike lock, it's not supposed to be mobile.
Frankly, this all seems like quite a lot of fuss over nothing. Firstly, padlocks are generally very easy to break open with a set of bolt cutters. In particular, if you look at more secure traditional padlocks, practically all of them share physical design features intended to minimize how much of the shackle is accessible. So to be clear: the second you see the shape of that padlock, you know it's not designed to be super secure.
With that in mind - which are you more worried about? Something spoofing your Bluetooth pass code using some advanced tech, physically unscrewing the back and deconstructing the padlock, or the third option: chop open the shackle?
What I find amazing is that they thought they could advertise this product as more secure than any other padlock with the same mechanism. This is a fingerprint padlock; maybe people like that convenience, but don't try to pretend physical security isn't a concern.
If someone chops open your padlock you find out about it next time you see it.
The vulnerabilities in the article might go unnoticed, so the user keeps locking gates, chains, whatever with the bad lock and the crooks can come again and again and rifle through your shed, garage, whatever.
Most locks are in public places, so the threat from a method that doesn’t draw suspicion is much higher than from one that requires carrying in a huge set of bolt cutters to do the job.
Also, bolt cutters leave a mark. A high-tech hack doesn’t (unless the lock records unlock attempts and you actively review them). Which means that to find out whether your lock has been hacked, you’ll likely have to physically go there, open it up, and look inside to see if anything is missing.
> With that in mind - which are you more worried about? Something spoofing your Bluetooth pass code using some advanced tech, physically unscrewing the back and deconstructing the padlock, or the third option: chop open the shackle?
Are you being obtuse? I'm worried about the options that are 1) easiest to perform and 2) arouse the least suspicion.
So, obviously the first two by a mile. The tech is not advanced.
Walking around lugging bolt cutters is pretty obvious, and also makes you more likely to be charged with "going equipped for theft or burglary" in many countries.
Also, it costs $99. I'm pretty sure you could buy a fairly secure normal padlock for $99.