thewayne: (Cyranose)
Across the world, smartphone theft has been a rising crime category while crime overall is trending down. I seem to recall that in New York City it's the fastest-growing crime. Law enforcement across the country and consumers have been begging for legislation requiring cell phone service providers to implement a kill switch, so that if your phone is stolen, you can easily have it locked or wiped.

Both Android phones and iPhones have this capability; I'm not sure about the new generation of Windows smartphones. But you have to be aware of the capability and configure your phone before it's stolen for it to be effective. For iPhones and iPads, you install the Find My iPhone app and link it to a free iCloud account: when your device is lost or stolen, you sign on to iCloud and you can lock the device, wipe it immediately, make it beep, or display a message that says 'Hey! Return the phone from whence you got it!' or whatever. I don't know exactly how you do this under Android, but I know the capability is there and the process is similar.

Additionally, iOS devices can be configured to wipe themselves after ten failed attempts to get past the lock screen; I'm sure Android has a similar feature. So if you think you're going to be arrested, turn off your device and make it that much harder for your phone to be probed. Most smartphones these days are already encrypted, but law enforcement forensic tools can typically get past that.

Law enforcement wants this because it will reduce violent crime: a lot of people get hurt before surrendering their $400 phone. The federal Department of Justice wants to put the kibosh on it. They say there's too much risk that criminals could have co-conspirators wipe their phones before they can be examined, and apparently this has already happened: one drug gang actually had an IT department that knew to wipe devices if a dealer was arrested.

There's an easy way for law enforcement to preserve evidence. First, turn off the device. Next, in the case of a non-iOS device, remove the battery. Third, put the device in a Faraday bag. This blocks all signals from getting in or out of the device, preserving it for when the police get around to getting a search warrant. If the judge decides not to issue the warrant or you're released, no harm no fowl. The chickens appreciate the no harm part.

So the Feds want to prevent a technology that would reduce violent crime by making the value of the stolen object pretty much nil, because it would represent a slight increase in the difficulty of doing their job. I wish I had that power: the next time I get a tech support job, I could make it illegal to hire idiots, just to make my own job easier.

http://www.wired.com/2014/04/smartphone-kill-switch/
thewayne: (Cyranose)
Edward Snowden was a featured speaker last month at both TED and SXSW, teleconferencing in for each. From his talks, Wired came up with a list of ten things that can be done to improve the security and privacy of our information. It's a pretty good list, but not one the individual can do much with: it's almost entirely dependent on being implemented by ISPs, web sites, and engineers. Still, it's not a bad start.

http://www.wired.com/2014/03/wishlist/
thewayne: (Cyranose)
Basically, the NSA doesn't want to watch communications on a computer-by-computer basis. They tap backbones, the connections where huge amounts of information flow between internet servers. They tap major ISPs. Your computer? Chump change. If they know what you're saying to other people, they don't really need to tap your computer. And the thing that makes this possible?

Weak routers.

A router takes the packets generated by all of the computers on your network, wired or wireless, aggregates them, and sends them upstream across your connection to another router at your ISP with a faster connection, which sends them upstream to another router with a faster connection, and so on. Eventually your traffic gets to its destination and information comes back, and the routers (the "hops") between your PC and the server or site you wanted to access route the replies back to their origin. The problem is that routers are not easy to configure (it takes specialized knowledge), and if you need to patch one, you risk breaking the configuration. A broken configuration means downtime, a bad thing.
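If you want to see those hops for yourself, here's a minimal sketch that just shells out to the traceroute utility. It assumes a Unix-like box with traceroute installed, and the destination hostname is only a placeholder:

    import subprocess

    def show_hops(destination="example.com", max_hops=20):
        """Run the system traceroute and print each router (hop) on the path.
        Assumes a Unix-like system with the traceroute binary on the PATH."""
        result = subprocess.run(
            ["traceroute", "-m", str(max_hops), destination],
            capture_output=True, text=True, check=False,
        )
        print(result.stdout)

    if __name__ == "__main__":
        show_hops()

Every line of output is a router your traffic passes through, and each one of them is a potential monitoring point.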

So most of the time, once a router is working well and the configuration is backed up, it's pretty rare for it to be upgraded. Upgrades are risky because the vast majority of businesses don't have a duplicate network set up on which router patches can be tested.

And a router that is not upgraded, just like your computer, is vulnerable to being compromised and exploited.

So the NSA's money is best spent compromising and monitoring the routers upstream of your connection: there is a lot more information present at that point, so it's more efficient.

Which is not to say that they can't compromise your computer and get in and look at things directly.

There is an old maxim about what defines a secure computer: it's not connected to any communication device, it's turned off, buried in 10' of concrete, and in a locked room with an armed guard. It's highly unlikely that a computer thus secure can be compromised.

http://www.wired.com/threatlevel/2013/09/nsa-router-hacking/
thewayne: (Cyranose)
Interesting opinion piece by a former EFF attorney who now has a boutique law practice for privacy rights. Basically, the courts can't compel you to reveal information you know that could incriminate you, such as the combination to a safe or the crypto key to computer files, though that one gets tossed back and forth a lot. But they can demand the key to a lock box, because it is a physical artifact: not something you know but something you possess. They can demand DNA swabs and fingerprints, and could conceivably demand your fingerprint to unlock your phone.

Thus, the argument that fingerprint scanners can undermine the Fifth Amendment protection against self-incrimination.

http://www.wired.com/opinion/2013/09/the-unexpected-result-of-fingerprint-authentication-that-you-cant-take-the-fifth/


Another article on Wired talks about fingerprint scanners meaning the end of PINs for ATMs and such. I don't buy it; there are too many variables for common use of fingerprint scanners. They can work for certain applications: my former police department now uses fingerprint scanners on all of its computer department's PCs, and I have a teacher friend who uses one on his Lenovo laptop to keep his students from mucking with it. But they're far from perfect, and if you work in a cold environment or do a lot of work with your hands and get cut up, they can be unreliable.

I'm sure Apple has an alphanumeric entry code to bypass the fingerprint scanner, but it seems to me that if it has to be used in conjunction with the scanner for regular use, you're really setting up a usability nightmare.

http://www.wired.com/business/2013/09/iphone-fingerprint-ends-pin/


Mythbusters did a great show on defeating home security devices, including a lock with a fingerprint scanner. They got right past it; one of the techniques they used was to dust the print, scan it at 300-400%, fill in the lines with a Sharpie, then reduce it back to 100%, and the scanner totally accepted it.
thewayne: (Cyranose)
Yet another DefCon demonstration. In this case, the lock is advertised as secure and flexible because it's easy for the owner to reprogram it for a house sitter or whatever, then change it back when they need to. It's not a digital lock; it needs a key like most others. Two vulnerabilities are demonstrated in videos with this article. The first uses a piercing blade and a hammer: the blade is inserted into the keyway and whacked with the hammer until it pierces the thin metal at the back of the lock. A wire with a loop is then inserted to turn the tailpiece, the part that actually engages the lock mechanism. Once that's turned, the lock is open, and unless there's a very close physical inspection, you can't tell that the lock is broken because your key still works in it.

There's another technique that's been around for years called bumping; after you bump a lock, any key will work in it and the lock is physically broken. This is different.

The second technique uses a screwdriver and a pair of pliers. The lock is supposedly rated to withstand 300 inch-pounds of torque on the cylinder; it turns out that it will turn with about a hundred.

Kwikset, of course, denies that these vulnerabilities exist.

http://www.wired.com/threatlevel/2013/08/kwikset-smarkey-lock-vulns/
thewayne: (Cyranose)
"Stomping on the brakes of a 3,500-pound Ford Escape that refuses to stop–or even slow down–produces a unique feeling of anxiety. In this case it also produces a deep groaning sound, like an angry water buffalo bellowing somewhere under the SUV's chassis. The more I pound the pedal, the louder the groan gets–along with the delighted cackling of the two hackers sitting behind me in the backseat. Luckily, all of this is happening at less than 5mph. So the Escape merely plows into a stand of 6-foot-high weeds growing in the abandoned parking lot of a South Bend, Ind. strip mall that Charlie Miller and Chris Valasek have chosen as the testing grounds for the day's experiments, a few of which are shown in the video below. (When Miller discovered the brake-disabling trick, he wasn't so lucky: The soccer-mom mobile barreled through his garage, crushing his lawn mower and inflicting $150 worth of damage to the rear wall.) The duo plans to release their findings and the attack software they developed at the hacker conference Defcon in Las Vegas next month–the better, they say, to help other researchers find and fix the auto industry's security problems before malicious hackers get under the hoods of unsuspecting drivers."

I doubt anyone is surprised. If it's a computer, chances are that eventually it will be hacked. Disabling the brakes? Not good. And I believe it's Infiniti that is developing a car with drive-by-wire steering, where the steering wheel is not physically coupled to the front wheels, which means a computer translates your input (turning the steering wheel) into commands to turn the wheels.

Ford is a little unusual in that it offers an interface to its cars' computer systems that people are allowed to tap into; someone used it to develop a vibrating shifter for manual transmissions that tells you when to shift, intended for people who are new to stick shifts. Supposedly this port doesn't let you into a modifiable portion of the computer, but still....
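For a sense of what "tapping in" can look like, here's a minimal sketch that passively listens to a vehicle CAN bus with the python-can library. To be clear, this isn't Ford's developer interface; it assumes a Linux SocketCAN adapter already brought up as can0, and it only reads frames, it doesn't send anything:

    import can  # pip install python-can

    def sniff_can(channel="can0", count=20):
        """Passively read a handful of raw frames from a CAN bus and print them.
        Assumes a SocketCAN adapter on Linux already brought up as can0."""
        bus = can.interface.Bus(channel=channel, bustype="socketcan")
        try:
            for _ in range(count):
                msg = bus.recv(timeout=5.0)
                if msg is None:
                    break  # nothing heard on the bus
                print(f"id=0x{msg.arbitration_id:03x} data={msg.data.hex()}")
        finally:
            bus.shutdown()

    if __name__ == "__main__":
        sniff_can()

Miller and Valasek's work goes well beyond listening, of course: the scary part of their research is injecting messages that the car's modules simply trust.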

http://tech.slashdot.org/story/13/07/25/1732257/hackers-reveal-nasty-new-car-attacks


In other DefCon news, a hack was demonstrated that easily and totally bypassed Volkswagen's security systems, making it really easy to steal their cars while leaving no trace, which gives insurance companies a potential out by saying there was no evidence of theft. Volkswagen sued in court to keep the information from being disclosed at DefCon and surprisingly won, so they're going to get a little bit of time to cover their butts before more information on this hack gets into the wild.
thewayne: (Cyranose)
The H is a computer security and news site out of Germany that I have found quite informative. Unfortunately, the owners have been unsuccessful in monetizing it and have decided that it is time to close the doors. They were a pretty good news source, and I'm very sad to see them go.

http://www.h-online.com/open/news/item/The-H-is-closing-down-1920027.html


Here's a list of some of their favorite news stories, including the one that speculated that Skype could be listened to by Microsoft and government agencies long before PRISM was revealed.

http://www.h-online.com/features/The-Final-H-Roundup-1919816.html
thewayne: (Cyranose)
"A federal judge today rejected the assertion from President Barack Obama’s administration that the state secrets defense barred a lawsuit alleging the government is illegally siphoning Americans’ communications to the National Security Agency.

"U.S. District Judge Jeffrey White in San Francisco, however, did not give the Electronic Frontier Foundation the green light to sue the government in a long-running case that dates to 2008, with trips to the appellate courts in between."


It's complex, but there's no way lawsuits over things like PRISM will be simple. But this judge has knocked out the basic defense the administration has been using, so now we'll see how things will proceed.

http://www.wired.com/threatlevel/2013/07/state-secrets-defense/
thewayne: (Cyranose)
A very cogent take from conservatives in another country on the PRISM et al. surveillance state that was slid into our country with little knowledge on the part of the citizenry.

I especially liked the first comment: "...ONLY credible suspicion should drive surveillance."

http://www.economist.com/news/leaders/21579455-governments-first-job-protect-its-citizens-should-be-based-informed-consent
thewayne: (Cyranose)
Excellent article from the former director of application security at Twitter.

It focuses on several points. First, federal criminal statute is spread over 27,000 pages. Even the Feds don't know how many laws there are, but it's estimated to be in excess of 10,000. For example, it is illegal to possess a lobster under a certain size; it doesn't matter how you got it, and ignorance of the law is no excuse. The article also talks about how violating the law is sometimes necessary to encourage change. In Minnesota, sodomy was illegal until 2001, and the state recently approved same-sex marriage. If we had 100% effective law enforcement, it would be extremely difficult to get such laws changed, because everyone who would benefit from the change would be branded a criminal.

Another point: manpower. Following someone used to require law enforcement to commit one or more people to the job. Now we all carry our very own tracking devices, and last year the cell carrier Sprint, by itself, responded to 8 million tracking requests from law enforcement. That's pretty much the entire population of New York City. It's much easier for law enforcement to relax their standards and be profligate in their information requests since they don't have to invest the manpower to follow someone. Myself, I've become tempted to put my phone into airplane mode just to screw up my tracking data. I have no reason to believe that law enforcement would be interested in me, but I also see no reason to make their job easier if they do take an interest. Of course, the question then becomes: would turning off their ability to track me pique their interest in me?

The author also mentions license plate scanners. I actually saw those in use in El Paso, 100 miles south of me and a place we visit every couple of months. If I ever see Phoenix or any of the other places I regularly spend time in getting them, I'm buying one of those LED license plate frames.


I especially like two paragraphs in the conclusion: "Some will say that it’s necessary to balance privacy against security, and that it’s important to find the right compromise between the two. Even if you believe that, a good negotiator doesn’t begin a conversation with someone whose position is at the exact opposite extreme by leading with concessions.

"And that’s exactly what we’re dealing with. Not a balance of forces which are looking for the perfect compromise between security and privacy, but an enormous steam roller built out of careers and billions in revenue from surveillance contracts and technology. To negotiate with that, we can’t lead with concessions, but rather with all the opposition we can muster."


I was recently discussing this topic with a friend who is part of the "I have nothing to hide" camp. He surfs porn on the internet. He's also a teacher. I have no idea what flavors of porn he's interested in, and I'm sure they're perfectly kid-safe. But what would happen to his career if that information were released? It could certainly be a career-ending revelation.

I don't have anything on my computers that I'm particularly ashamed of, including my browser history, but I don't want it to become public knowledge. The fact that I have nothing in particular to hide doesn't give law enforcement or anyone else the right to stick their nose in it without probable cause and a search warrant. My laptop is encrypted, as are my desktop and all of my backups, including my iPhone backups, which do not go to the cloud. I will not allow my equipment to be casually examined. I will not go gentle into that good night if they take an interest in me; they're going to have to produce a valid search warrant before I unlock anything.


http://www.wired.com/opinion/2013/06/why-i-have-nothing-to-hide-is-the-wrong-way-to-think-about-surveillance/
thewayne: (Cyranose)
User passwords, particularly on Unix/Linux servers, are stored in a single file. The user name is typically stored in clear text, and the password is run through a one-way hashing algorithm, usually with a value called a salt added to the password first. But the salt is not always added, which makes the passwords more vulnerable. One method of attacking such a password list is known as a dictionary attack: there are files available online containing a BILLION passwords that have already been run through the hashing algorithm, and then it's just a matter of matching them against entries in the password file that you stole.
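As a toy illustration of the attack, here's a minimal sketch in Python. The wordlist, the sample hashes, and the choice of MD5 are all just placeholders; real cracking tools use enormous dictionaries and whatever algorithm the target system actually used:

    import hashlib

    # A toy "precomputed dictionary": hash of every candidate word -> the word.
    # Real attackers use wordlists with hundreds of millions of entries.
    wordlist = ["password", "letmein", "monkey", "dragon", "trustno1"]
    precomputed = {hashlib.md5(w.encode()).hexdigest(): w for w in wordlist}

    # Pretend these came out of a stolen, unsalted password file.
    stolen_hashes = [
        hashlib.md5(b"letmein").hexdigest(),
        hashlib.md5(b"correct horse battery staple").hexdigest(),
    ]

    for h in stolen_hashes:
        match = precomputed.get(h)
        print(h, "->", match if match else "not in the dictionary")

A salt defeats the precomputation: if the stored value is really hash(password + salt) with a different salt per site or per user, a ready-made table of plain hash(password) values no longer matches anything.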

Ars Technica submitted a file of 16,000 passwords to three security experts, "and asked them to break them. The winner got 90% of them, the loser 62% -- in a few hours."

The attackers are now using multiple-dictionary attacks. If you use a strong root word plus a designator word, you're not as strong as you thought: "Steube was able to crack 'momof3g8kids' because he had 'momof3g' in his 111 million dict and '8kids' in a smaller dict. 'The combinator attack got it! It's cool,' he said."


Schneier goes on to suggest what appears to still be a strong password system: making up a sentence that is significant to you. It's a simple method and he explains it in the article.
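Schneier's article explains his exact recipe; as a rough sketch of the general idea (and only that, not his precise method), something like this collapses a memorable sentence into a password seed. The sample sentence is just a placeholder:

    def sentence_to_password(sentence: str) -> str:
        """Keep the first letter of each word plus any digits or punctuation.
        A rough illustration of the 'memorable sentence' idea, not Schneier's
        exact recipe."""
        out = []
        for word in sentence.split():
            out.append(word[0])
            out.extend(ch for ch in word[1:] if not ch.isalpha())
        return "".join(out)

    # Placeholder sentence; use one that's memorable to you and keep it private.
    print(sentence_to_password("When I was seven, my sister threw my stuffed rabbit in the toilet!"))
    # -> WIws,mstmsritt!

You'd still want to mix in your own substitutions rather than use the raw output as-is.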

http://bruce-schneier.livejournal.com/1210052.html
thewayne: (Cyranose)
The software was sold to the Turkish government. An American woman who is active in protesting a Turkish organization that runs charter schools in the USA and around the world received a spearphishing email crafted specifically for her that appeared to be from a Harvard professor who is also active against the group. Well, the senders misspelled "Harvard," so she didn't open it and instead forwarded it to a security group.

The security group created a honeypot, which I think is really amazing tech, and started digging. The web site referenced in the email had all sorts of malware hiding behind it, and the software in question is known to include silent remote-control capability. The package pointed back to an American company that sold the software to Turkey; they deny any responsibility for how it is used, naturally.

Turkey is a NATO country. Technically this could be interpreted as an ally spying on American citizens, not that we would EVER do something like that.

http://www.wired.com/threatlevel/2013/06/spy-tool-sold-to-governments/
thewayne: (Default)
There are some interesting arguments to be made that some of us swear allegiance to specific vendors, expecting protection from our liege lord but having no say in how that protection is expressed.

http://www.wired.com/opinion/2012/11/feudal-security/
thewayne: (Default)
It is possible that as many as 17 MILLION passwords were leaked.

A lot of the problem was that all three services, LinkedIn, Last.fm, and eHarmony, stored hashed passwords without a salt value. What a hash does superficially looks like encryption, but it isn't. Let's say your password is XYZ. You plug it into a hash algorithm and it spits out 128 bits or more of seemingly random data. The problem is something called rainbow tables, and the Wikipedia entry for them is quite interesting. When hackers try to break a hashed password list, they know that XYZ produces a hash value of 123456whatever, and without a salt it produces that value every time. So if they see a hash value of 123456whatever, they know the value supplied was XYZ. Rainbow tables contain huge numbers of words passed through hash functions, so all an attacker has to do is match stolen values against the tables and they might have usable hits.

If you use a salt value, which is a fixed or repeatable random value appended to the value being hashed, you increase the difficulty of successfully using a rainbow table to break your hashes. So instead of passing XYZ through the hash algorithm, you pass XYZ(salt value), and the salt value is probably different for every implementation because "I" as the system programmer decide on the value, or on the algorithm that supplies it, before it gets passed to the hasher. So maybe I do a permutation on your email address, and instead of passing XYZwayne@someemail.com, which might be predictable, I pass XYZ_COMwayneSOMEEMAIL, which can be consistently duplicated algorithmically. Since "you" will configure your web site's hasher with a different salt value, if someone steals my hash file and breaks it against a rainbow table, that won't break your hashed values.

Further, adding characters for the salt to the value being hashed greatly increases the difficulty of brute-forcing the original password, and adding a munged-up version of someone's email address will add a lot of bits of entropy.
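Here's a minimal sketch of the scheme described above: deriving a per-user salt from a rearranged email address and hashing password plus salt. The function names and the SHA-256 choice are my own placeholders; in a real system you'd want a random, stored per-user salt and a deliberately slow password hash like bcrypt, scrypt, or PBKDF2:

    import hashlib

    def derive_salt(email: str) -> str:
        """Toy per-user salt built by rearranging the email address, roughly as
        described above; for wayne@someemail.com it yields _COMwayneSOMEEMAIL."""
        local, _, domain = email.partition("@")
        name, _, tld = domain.rpartition(".")
        return f"_{tld.upper()}{local}{name.upper()}"

    def hash_password(password: str, email: str) -> str:
        """Hash password + derived salt with SHA-256 (illustration only)."""
        return hashlib.sha256((password + derive_salt(email)).encode()).hexdigest()

    def verify(password: str, email: str, stored_hash: str) -> bool:
        return hash_password(password, email) == stored_hash

    stored = hash_password("XYZ", "wayne@someemail.com")
    print(verify("XYZ", "wayne@someemail.com", stored))   # True
    print(verify("XYZ", "other@someemail.com", stored))   # False: different salt

Since every site (and here every user) ends up with a different salt, a precomputed rainbow table of plain hashes is useless against the stored values.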

Here's the sad part: "although eHarmony implores its users to use strong passwords including both upper and lower case letters, it saves the passwords in all upper case, thereby weakening its already weak security further."

I quit the bank I was with because they bought a new online banking system. They forced me to change my password; no problem. The system took my new password, acknowledged that I typed it correctly twice, and I was good for that session. I was then never again able to log on with that password. I'd call their tech support and they'd tell me that it had to be more than X characters long. No problem there; it was about half again longer. Well, it turns out that the password had to be between X and X+3 characters, and the password I wanted to use was X+4 (or more) characters. Their software wasn't smart enough to tell me that my password was too long, a combination of bad programming and stupid design, and because of it they lost a customer who had shuffled probably over $100,000 through that account over five years.

http://www.h-online.com/security/news/item/Password-leaks-bigger-than-first-thought-1614516.html
thewayne: (Default)
Adobe Photoshop contains a buffer overflow vulnerability in its TIFF features that has already been the target of a public proof-of-concept exploit, as well as another unspecified security problem that allows attackers to secretly infect systems simply by getting users to open a specially crafted file.

I just bought Adobe Creative Suite CS5.5, new, at full student price, late last year, probably 7 or 8 months ago. And now they want me to pay $200 to get the fix for a bug of the sort that everyone else patches for free. There's another wonderful quote from the article: "Adobe only makes the general recommendation that its customers should 'follow security best practices and exercise caution when opening files from unknown or untrusted sources' as the holes do represent substantial threats."

"Security best practices" from everyone else is to download and install a patch that is freely available from the vendor. This idiocy from Adobe is going to cost them a lot of customers who are going to stop paying for the product and start pirating it. I am very happy with the feature set of the 5.5 suite and see no need to upgrade to PS 6 at this time, so I think I'm going to risk staying unpatched. I don't normally deal with files from untrusted sources, I'll have to be more vigilant about TIFF files, though. The unspecified vulnerability does concern me, though.

And there is proof-of-concept code for this exploit in the wild. Now that Adobe has said it's not going to help people with software less than a year old, this bug's visibility on exploiters' radar will rise massively, and IT WILL be targeted.

http://www.h-online.com/security/news/item/Adobe-puts-a-price-tag-on-security-updates-for-Photoshop-and-others-1571517.html

But it's really not a problem! Adobe, all hail, says that Photoshop is not a target, so there's nothing to worry about!

http://www.h-online.com/security/news/item/Adobe-Photoshop-is-not-a-target-for-attackers-1572717.html

EVERYTHING is a target these days. NO SOFTWARE SHOULD GO UNPATCHED. While I hate the flood of patches that Microsoft releases, they are very good at patching their products. Apple releases patches at a slower rate, but is also very diligent about patching. Adobe needs to stop seeing this as a revenue stream and recognize that it is a responsibility which, if not fulfilled, is going to cost them customers.

Idiots. I wish I owned some Adobe stock so I could start a shareholder action to whack them upside the head with clue-by-fours.


EDIT: Adobe backs down, will release a patch for PS 5 and 5.5.

http://www.h-online.com/security/news/item/Adobe-backs-down-will-release-patches-for-critical-holes-1574341.html
thewayne: (Default)
A National Security Letter (NSL) is a tool the FBI can use to compel information from a company or organization without getting an actual court order. Since 2000, the FBI has issued over 275,000 of them; the figures for two of those years have not been released, and 2011's have not been compiled yet. One feature of an NSL is that it carries a gag order: the outfit served cannot talk about it or tell the people the NSL targets that they are being targeted.

If you are such a served organization, it used to be that to challenge an NSL you had to go to court; now you can file an objection by fax. You're still going to end up in court, but at least the first step is easier. In this case, the company served wanted to notify the target so that the target could mount a legal defense.

Thus the kerfuffle.

NSLs have been challenged in court many times, the challengers frequently backed by the likes of the ACLU or EFF, and elements of them have been found unconstitutional. The problem is that all an FBI agent needs to do to get an NSL is go to the agent in charge of his office and get him to sign off on it. The letter simply states that the information is needed for an ongoing investigation. With a wiretap order, at least, you have to specify what information you're looking for and why you have reasonable suspicion that nefarious activity is going on; with NSLs it's just a signature and you're off and running. And the GAO has found massive abuse in the issuing of NSLs, so there are definite problems.

It's definitely a thorny issue.

http://www.wired.com/threatlevel/2012/03/mystery-nsl/
thewayne: (Default)
Details are still coming out, but both are contacting banks. Apparently the hackers got everything, including PINs. So contact your bank about changing your PIN (at some banks you can go into a branch and re-code your card) and keep an eye on your statements for strange activity.

http://wallstcheatsheet.com/stocks/mastercard-gets-hacked.html/


UPDATE: The breach happened at a credit card processor by the name of Global Payments. Their stock dropped 9% before trading in it was halted.

http://krebsonsecurity.com/2012/03/mastercard-visa-warn-of-processor-breach/


And Slashdot has a thread:
concealment writes with news that VISA and MasterCard have been warning banks of an incident at a U.S. card processor that may have compromised as many as 10 million credit card numbers. From the article: "Neither VISA nor MasterCard have said which U.S.-based processor was the source of the breach. But affected banks are now starting to analyze transaction data on the compromised cards, in hopes of finding a common point of purchase. Sources at two different major financial institutions said the transactions that most of the cards they analyzed seem to have in common are that they were used in parking garages in and around the New York City area."

http://yro.slashdot.org/story/12/03/30/1454219/visa-mastercard-warn-of-massive-breach-at-credit-card-processor

Here's the thing that I don't get: the hack happened "...between Jan. 21, 2012 and Feb. 25, 2012", so why is it only now being talked about, OVER A MONTH LATER?

Also, the fraud activity seems to be mainly on commercial/business credit cards. Theoretically, if your card was issued after Feb. 25, you should be safe.
thewayne: (Default)
Wow. Dude was working as staff at a security conference next door to the RSA conference. Professionally he works as a penetration tester, and he decided to see what he could get away with.

And he got away with everything.

Not only did he get into the conference and attend some sessions, he got into the vendor hall before it opened.

http://www.csoonline.com/article/print/701040

http://it.slashdot.org/story/12/02/28/2147247/how-to-sneak-in-to-a-security-conference
thewayne: (Default)
Australia is getting full-body x-ray scanners in its international airports. At least they're displaying stick figures, not full outlines, though the full outlines are probably stored internally.

http://yro.slashdot.org/story/12/02/06/145212/full-body-scans-rolled-out-at-all-australian-international-airports
