8th March 2017
24th Feb 2017
Passengers put up with endless hassles from the Transportation Security Administration in hopes it all keeps them safe. So after 11 people got past a JFK checkpoint Monday without being screened, you’d think heads would roll. Hah!
The 11 got through, apparently, because TSA staffers left a security lane open but unmanned. Three set off a metal-detector alarm and still walked on. And TSA didn’t tell Port Authority cops for two hours.
Airport police from around the country call the flap “unconscionable.” TSA has taken its “eye off the ball,” fumed American Alliance of Airport Police Officers co-founder Marshall McClain. Uh, ya think?
True, most of the 11 were tracked down and found not to be threats . . . after they landed.
The agency’s official statement on what comes next: “Once our review is complete, TSA will discipline and retrain employees.” Oh, and it has ID’d the responsible workers and “appropriate action is being taken.”
Retraining. Appropriate action. How about fired? Sorry, no: TSA staff are a protected branch of the American Federation of Government Employees, one with the hilarious motto: “Stronger Union, Safer Skies.”
Private-sector workers who mess up so badly as to put lives in jeopardy would be gone in a heartbeat. Heck, they’d be fired for far less serious breaches.
Somewhere along the road to making America great again, Mr. President, how about privatizing the damn TSA to end all the maddening “security theater”?
13th Feb 2017
If you attend a protest in Washington, D.C., nowadays, better plan on leaving your cellphone at home. That is, unless you want police to confiscate it, mine it for incriminating information and then gather even more data from their BFF — Facebook.
At least one person arrested during protests on Inauguration Day got an email from Facebook’s Law Enforcement Response Team alerting them that investigators wanted access to their data. Another received a Facebook data subpoena.
The email was basically a countdown to when Facebook inevitably handed that data over to D.C. police. That is, unless the respondent figured out how to file an objection within a 10-day window.
When over 230 people were arrested in D.C. during protests against Donald Trump last month, many of those rounded up were not part of the protests. Cops swept up medics, legal observers, and six journalists from Vocativ, RT America and others.
All of their phones were confiscated and retained.
Everyone arrested now faces felony charges and up to 10 years in prison.

In the Bay Area, where we love a good protest, it’s very rare that arrested protesters get prosecuted. So it’s odd to think that protesters would have their social media scrutinized after an arrest. Though, like in most cities across America, it’s extremely common for investigators to search the social media of suspects in other crimes if they believe that the suspect posted something related (like photos of a beating). SFPD even has an officer devoted to following social media — most heavily, Snapchat and Instagram, as those are apparently where you find the best crime stuff.
Oakland Police and supporting agencies like California Highway Patrol have been very transparent about monitoring Twitter to determine protest movement and plans. And we’ve been pretty vocal about pushing back. It only makes sense that we’d resist any form of surveillance, seeing that we’re ground zero around here for ethically challenged startups that invade our privacy. Fighting the surveillance state has become part of our DNA. But a wide-ranging Facebook subpoena for felony protest prosecution isn’t something we’ve seen the likes of.
The subpoena issued to Facebook (this one by the U.S. Attorney’s Office on January 27, 2017 and signed off on by a D.C. Metropolitan Police Detective) obtained by press this week is chilling. It targeted another inauguration arrestee, and requests subscriber information from Facebook that includes all names, all addresses (home, business, emails), phone records, session details (IP, ports, etc), device identification info, payment information, and more.
CityLab explained, “The redacted blocks on the second page shield columns of phone numbers, which are connected to other arrestees for whom the district attorney and police are seeking information.”
The list of phone numbers may indicate that police have gained access to someone’s phone and are building a case with what they found. A screenshot provided to CityLab indicates police began mining information from the confiscated devices right after the arrests.
On one hand, that could’ve been automated pinging by Gmail to Google’s servers. Or, it could’ve been something darker. When phones are taken as evidence, they’re supposed to be secured in a signal-blocking Faraday bag to prevent remote wipes. Fred Jennings, a cybercrime defense attorney at the firm Tor Ekeland in New York, told press: “If it had been secured properly and placed in the bag to safeguard it, there’d be no way for it to ping the server.”
For some of us, this sets off a different set of alarms. It’s scary enough that police are arresting journalists and mining our phones for all the terrifyingly detailed data Facebook seems all too happy to give up. But authorities with questionable intent are also collecting our contacts, and pose a very real risk for our protected sources.
Some of this could be solved by ditching our devices in favor of carrying on-the-scene burner phones. But this presents its own host of complications, even for the well-intentioned protester or march participant. For one, it’s a hassle for most people. It also defeats the purpose of using your Twitter or Facebook account. More than ever, it’s vital that our voices are heard through media we share from our phones. Things like immigration-ban protests and the state-level denial of chaos at the airports can’t be dismissed when the realities are documented through our established Facebook and Twitter accounts.
Our phones let us keep a record of what authorities do to us and send a signal flare for help to our networks. That makes their use against us a much bigger problem than anything solved by saying “leave your phone at home” or “don’t talk about the protest online.”
It’s not a stretch to lay blame at Facebook’s feet for taking data we don’t necessarily want to give it, and for its well-established collaboration with police against its users. It’s a bigger stretch to suggest that the agreement between Facebook and its users is any kind of informed consent.
It’s interesting that this news comes up the same week that 333,000 people signed a petition demanding Facebook improve its corporate citizenship, with 1,500 of the signees being company shareholders. That document led to a proposal to remove Mark Zuckerberg from the board.
This, it said, was necessary at a time when Facebook “faces increasing criticism regarding its perceived role in the promotion of misleading news; censorship, hate speech and alleged inconsistencies in the application of Facebook’s community-standards guidelines and content policies; targeting of ad views based on race; collaboration with law enforcement and other government agencies; and calls for public accountability regarding the human-rights impacts of Facebook’s practices.”
It’s that collaboration with law enforcement and human-rights accountability we’ll be hearing more about as the D.C. arrest cases unfold. It’s not a new story, just an old one with a twist: Facebook got called out just before the US presidential election for colluding with authorities, specifically US police departments, against its users’ human rights. A coalition of 70 human-rights groups, including the ACLU, wrote a public letter to Facebook condemning the company’s zeal in doing police bidding around the world.
Facebook, of course, just wants us to live our lives so it can keep collecting data we don’t even know we’re creating. Recording and storing our location, connections, contacts, experiences, our secrets and our history.
It’s transforming our memories into a malevolent, atavistic shadow that someday may be used against us in a court of law.
8th Feb 2017
Queensland’s privacy commissioner is reviewing new ‘big brother’ surveillance technology being used to record video and audio of members of the public in the Moreton Bay area.
Yesterday, the Moreton Bay Regional Council announced it had deployed about 330 new devices in public spaces, with plans to install dozens more.
Mayor Allan Sutherland said it would help boost community safety.
“Moreton Bay Region now has the ability to not only see what’s going on, but to be able to hear what’s going on,” he said.
“We don’t listen on a daily basis; as requested if the police come along and say: ‘Can we have the footage?’
“Unless you’ve got anything to hide, you haven’t got anything to worry about.”
The devices record and store data for several weeks.
Queensland’s privacy commissioner Phil Green said he was enquiring to see if the use of the technology breached privacy laws.
“I’m still in the fact-finding mode — I obviously don’t act rashly, I’m trying to look into this and have a rational good public debate on the issue,” he said.
“If the public aren’t happy with this sort of development, then the State Government can enact laws, but I think the laws already possibly stop this sort of thing happening.”
He said the private sector could soon follow suit, unless privacy laws were clarified.
“Do the public want it? Because if councils do it, then the universities do it, and the hospitals do it,” Mr Green said.
“If it’s one council doing it, then it could be all the councils doing it across Australia, so we do need to look at it carefully.”
Mr Green said his office was only informed last week.
“I understand my office did receive a draft press release about it, but very scant on details and of course that’s probably not the best way of going about launching something about this when it involves a fair investment,” Mr Green said.
Councillor Sutherland was unable to comment on the development, but in a statement a spokesperson said the council had not breached any laws.
“Council provided a copy of its proposed media release and advisory signage to staff from the Office of the Information Commissioner,” the statement said.
“Council is satisfied its use of the CCTV footage and audio is consistent with its obligations under the Information Privacy Act.”
In the last budget, council announced $801,000 would be spent on upgrading surveillance cameras across public areas.
Some of the new cameras have been deployed in locations including Centenary Lakes Park in Caboolture, Burpengary Sports Precinct, and Bee Gees Way at Redcliffe.
Queensland Law Society president Bill Potts said the public should be concerned.
“It seems that not only big brother is watching but in the guise of the Moreton Bay council, he’s also listening,” Mr Potts said.
“I can understand why people in a public place may have no expectation of privacy, but their ordinary conversations about their friends, about their families, about their work and just the ordinary social chit-chat, should always remain completely sacrosanct.
“If the Mayor was fair dinkum in his argument that only those people who have something to hide would object to being listened to by the Moreton Bay Regional Council, perhaps he could volunteer to be listened to seven days a week, 24 hours a day, by his constituents before he understands the value of privacy.”
28th Jan 2017
The state of New York has privately asked surveillance companies to pitch a vast camera system that would scan and identify people who drive in and out of New York City, according to a December memo obtained by Vocativ.
The call for private companies to submit plans is part of Governor Andrew Cuomo’s major infrastructure package, which he introduced in October. Though many of the related proposals would be indisputably welcome to most New Yorkers — renovating airports and improving public transportation — a little-noticed detail included installing cameras to “test emerging facial recognition software and equipment.”
“This is a highly advanced system they’re asking for,” said Clare Garvie, an associate at Georgetown University’s Center for Privacy and Technology, who specializes in police use of face recognition technologies. “This is going to be terabytes — if not petabytes — of data, and multiple cameras running 24 hours a day. In order to be face recognition compliant they probably have to be pretty high definition.”
Cuomo’s office didn’t respond to multiple requests for clarification in the ensuing weeks after his announcement. But a memo from the Metropolitan Transportation Authority’s Bridges and Tunnels division, obtained through a Freedom of Information Act request, shows that on December 12, the MTA put out a call to an unknown group of private vendors of surveillance equipment. The proposed system would both scan drivers as they approached or crossed most of the city’s bridges and tunnels at high speeds, and would also capture and pair those photos with the license plates of their cars.
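The memo’s pairing requirement — matching a face image to a vehicle “via license plate number” — amounts to joining two capture streams on location and time. A minimal sketch of that join, with entirely hypothetical field names, record formats and matching tolerance (none of these details appear in the memo):

```python
from datetime import datetime, timedelta

# Hypothetical capture records from a single gantry (field names are assumptions).
plate_reads = [
    {"plate": "ABC1234", "lane": 2, "ts": datetime(2017, 1, 28, 8, 0, 1)},
    {"plate": "XYZ9876", "lane": 3, "ts": datetime(2017, 1, 28, 8, 0, 2)},
]
face_captures = [
    {"image_id": "f-001", "lane": 2, "ts": datetime(2017, 1, 28, 8, 0, 1, 400000)},
]

def pair_captures(plates, faces, window=timedelta(seconds=1)):
    """Pair each face capture with a plate read from the same lane within `window`."""
    pairs = []
    for face in faces:
        for read in plates:
            if read["lane"] == face["lane"] and abs(read["ts"] - face["ts"]) <= window:
                pairs.append((read["plate"], face["image_id"]))
    return pairs

print(pair_captures(plate_reads, face_captures))  # -> [('ABC1234', 'f-001')]
```

Even in this toy form the hard cases are visible: multiple occupants per vehicle, missed or misread plates, and captures landing just outside the window — part of why Garvie expects poor accuracy at highway speeds.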
“The biggest risk that comes with a system like this is its ability to track people, by location, by their face,” Garvie said. “So what needs to be put in place is a prohibition on the use of these cameras and the technology as a location tracking tool.”
The proposed system would be massive, the memo reads:
The Authority is interested in implementing a Facial Detection System, in a free-flow highway environment, where vehicle movement is unimpeded at highway speeds as well as bumper-to-bumper traffic, and license plate images are taken and matched to occupants of the vehicles (via license plate number) with Facial Detection and Recognition methods from a gantry-based or road-side monitoring location.
All seven of the MTA’s bridges and both its tunnels are named in the proposal.
New York City is home to more than 2,000 bridges and tunnels, which are owned by various agencies, including the New York City and state’s Departments of Transportation and Amtrak. It’s unclear as of this writing if those “crossing points” are similarly considering surveillance technology, though Vocativ has filed FOIA requests to each of them. Cuomo’s office didn’t respond to multiple inquiries. It’s similarly unclear how many, or even if any, private surveillance companies responded to the MTA’s proposal. A followup memo on Dec. 23 extended the deadline for submissions until Jan. 3, indicating the MTA wasn’t satisfied with the initial round of proposals.
New York City wouldn’t be the first in the U.S. to have a network of facial recognition cameras for law enforcement. In 2013, for instance, the Los Angeles Police Department admitted it had deployed 16 cameras equipped with face recognition software, designed to search for particular suspects. But the most prominent known system is in Moscow, which attempted to pair hundreds of thousands of CCTV cameras with advanced facial recognition software by NtechLab — the company behind the infamous FindFace software, which lets Russians stalk people they photograph by using the program to find their social media accounts.
Moscow’s system has been beset with problems, though, especially because CCTV cameras are designed to move with subjects, reducing image quality, and because they’re normally mounted above people’s heads.
“The findings from phase one of the pilot are that it’s remarkably inaccurate,” Garvie said. “This is the most advanced system we’re aware of, but it’s having a very hard time in real-world conditions of people walking.”
That indicates that an effective system like the one the MTA has called for might still be years away.
“The New York crossings project is talking about people driving at highway speeds, so I think we can expect very, very low accuracy rates,” Garvie said.
24th Jan 2017
A plan to rely on biometric recognition to further automate airport border processing raises privacy and ethical concerns about data security, according to an expert.
But another information security analyst says the plan – which would involve 90% of passengers being processed through Australian airport immigration without human involvement – would not present any more privacy concerns than current border control regimes.
The Department of Immigration and Border Protection is tendering for a company to provide it with an “automated processing solution” to support its “seamless traveller” plan, which would allow for the automated processing of passengers using biometric identification.
The department said it was expecting incoming air passengers to Australia to increase dramatically in coming years, and wanted to ensure they could move seamlessly through airports without compromising border security.
However, University of Wollongong tech and biometrics expert Prof Katina Michael said such technology had not been proven to improve security or airport efficiency.
Michael said the plan posed a risk to individual privacy and raised ethical dilemmas that had not been properly explained to the public.
“We are steam-training right through all of these technological transitions and we’re not really thinking about the ramifications,” she said. “Even if the system works, is that ethical to impose this system on the entire populace, without even asking them? I see the perceived benefit, but what I do know is that there will be real costs, human costs, not only through the loss of staff through automation, but also through discrimination of people who may appear different.”
Michael said recent threats to the security of government-held data such as the census failure should raise real concerns about the storage of biometric data en masse.
“I am worried about theft, I don’t buy the story that your data is safe. I think we’ve become almost complacent ‘oh there’s been another data breach. Oh they hacked in and stole the data’,” she said. “Is the next phase of rollout going to be ‘oh my e-health records were taken’, ‘oh my biometrics at border control were taken’?”
But others have played down concerns about the government’s plan. Information security expert and reporter Patrick Gray said airport passengers were already the subject of heavy surveillance and biometric testing.
Gray said the government’s plan appeared to simply make the recognition process less clunky than the current SmartGate systems used in Australian airports.
“Airports are already among the most surveilled places on the planet. The time to be worrying about this is when someone seriously proposes running live facial recognition against CCTV in public places like city streets and train stations with insufficient oversight on use. Then we’ve got a problem,” he said.
“Better, highly-automated facial recognition is going to be a massive privacy issue one day, but the technology at least makes sense in airports.”
According to tender documents, the government wants to replace the incoming passenger card, eliminate the need for physical tickets at border control, and allow some passengers to travel using contactless technology, which would remove the need to present a passport.
Manual marshalling points for triaging passengers would be removed and replaced with more automated processes. The technology would be trialled at Canberra airport, and later deployed at nine Australian airports.
20th Jan 2017
The Central Intelligence Agency on Wednesday unveiled revised rules for collecting, analyzing and storing information on American citizens, updating the rules for the information age and publishing them in full for the first time.
The guidelines are designed “in a manner that protects the privacy and civil rights of the American people,” CIA General Counsel Caroline Krass told a briefing at the agency’s headquarters in Langley, Virginia.
The new rules were released amid continued public discomfort over the government’s surveillance powers, an issue that gained prominence following revelations in 2013 by former government contractor Edward Snowden that the National Security Agency (NSA) secretly collected the communications data of millions of ordinary Americans.
The guidelines were published two days before President-elect Donald Trump is sworn into office and may be changed by the new administration. Trump has said he favors stronger government surveillance powers, including the monitoring of “certain” mosques in the United States.
The CIA is largely barred from collecting information inside the United States or on U.S. citizens. But a 1980s presidential order provided for discrete exceptions governed by procedures approved by the CIA director and the attorney general.
Known as the “Attorney General Guidelines,” the original rules over time became a “patchwork of policies and procedures” that failed to keep pace with the development of technology that can store massive amounts of digital data, said Krass.
In 2014, legislation gave U.S. intelligence agencies two years to develop procedures limiting the storage of information on U.S. citizens.
The new procedures, under development for years, were signed on Tuesday by CIA Director John Brennan and Attorney General Loretta Lynch.
While the 1982 guidelines were made public two years ago, sections were blacked out. The updated procedures were posted in full for the first time on the CIA’s website on Wednesday.
The updated procedures include what the CIA must do when it clandestinely obtains a computer hard drive holding millions of pages of text, hours of videos and thousands of photos containing information on foreigners and U.S. citizens.
Because extensive time and many analysts are required to assess such large volumes of data, the new rules regulate the handling of material whose intelligence value cannot be promptly evaluated.
They also regulate how such data can be searched and create strict requirements for dealing with unevaluated electronic communications, which must be destroyed no later than five years after they are first examined.
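Note that the five-year clock runs from first examination, not from collection. A toy sketch of tracking such a retention deadline per item from its first-examined date (an illustration only; the actual procedures are far more detailed):

```python
from datetime import date

RETENTION_YEARS = 5  # unevaluated communications: destroy within 5 years of first exam

def destruction_deadline(first_examined: date) -> date:
    """Latest permissible destruction date under the five-year rule."""
    try:
        return first_examined.replace(year=first_examined.year + RETENTION_YEARS)
    except ValueError:
        # Feb 29 examined date with no leap day five years on: fall back to Feb 28.
        return first_examined.replace(year=first_examined.year + RETENTION_YEARS,
                                      day=28)

print(destruction_deadline(date(2017, 1, 18)))  # -> 2022-01-18
```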
The rules were unveiled a week after civil liberties groups decried new guidelines approved by the Obama administration expanding the NSA’s ability to share communications intercepts with other U.S. intelligence agencies, including the CIA.
18th Jan 2017
About 13 million pages of declassified documents from the US Central Intelligence Agency (CIA) have been released online.
The records include UFO sightings and psychic experiments from the Stargate programme, which has long been of interest to conspiracy theorists.
The move came after lengthy efforts from freedom of information advocates and a lawsuit against the CIA.
The full archive is made up of almost 800,000 files.
They had previously only been accessible at the National Archives in Maryland.
The trove includes the papers of Henry Kissinger, who served as secretary of state under presidents Richard Nixon and Gerald Ford, as well as several hundred thousand pages of intelligence analysis and science research and development.
Among the more unusual records are documents from the Stargate Project, which dealt with psychic powers and extrasensory perception.
Those include records of testing on celebrity psychic Uri Geller in 1973, when he was already a well-established performer.
Memos detail how Mr Geller was able to partly replicate pictures drawn in another room with varying – but sometimes precise – accuracy, leading the researchers to write that he “demonstrated his paranormal perceptual ability in a convincing and unambiguous manner”.
Other unusual records include a collection of reports on flying saucers, and the recipes for invisible ink.
While much of the information has been technically publicly available since the mid-1990s, it has been very difficult to access.
The records were only available on four physical computers located in the back of a library at the National Archives in Maryland, between 09:00 and 16:30 each day.
A non-profit freedom of information group, MuckRock, sued the CIA to force it to upload the collection, in a process which took more than two years.
At the same time, journalist Mike Best crowd-funded more than $15,000 to visit the archives to print out and then publicly upload the records, one by one, to apply pressure to the CIA.
“By printing out and scanning the documents at CIA expense, I was able to begin making them freely available to the public and to give the agency a financial incentive to simply put the database online,” Best wrote in a blog post.
In November, the CIA announced it would publish the material, and the entire declassified CREST archive is now available on the CIA Library website.
9th Jan 2017
The president of the UK’s Police Superintendents’ Association, Gavin Thomas, argued that preventing those who’ve committed internet crimes from going online would be far better and more cost-effective than sending them to jail.
Cyber crime is soaring in the UK, according to 2016 figures from the Cyber Crime Assessment unit of the UK National Crime Agency, which found it has surpassed all other forms of criminal activity. The report found that “cyber enabled fraud” made up 36 percent of all crime reported and “computer misuse” a further 17 percent: together 53 percent, more than every other category of crime combined.
Speaking to The Telegraph, Chief Superintendent Thomas said that sending cyber criminals to prison is expensive and not an appropriate or effective way of tackling the growing problem.
Instead, he has suggested fitting offenders with electronic wifi jammers that would prevent them from accessing the internet. Wifi jammers work by disrupting the frequency on which a signal is transmitted. Thomas suggested that the device could fit around the person’s wrist or ankle, similar to an electronic tag.
“We have got to stop using 19th century punishments to deal with 21st century crimes,” he said. “It costs around £38,000 a year to keep someone in prison but if you look at the statistics around short term sentencing the recidivism rate is extraordinarily high.”
“This could be introduced as part of community sentencing, so that the 16-year-old does not have access to the internet or wifi for a period and then in conjunction they have to do some sort of traditional work in the community,” he suggested.
Thomas said the criminal justice system also needs to find ways of tackling the growing problem of crime committed on social media. “There is a growing phenomenon here and we need to start to think in the future about how we deal with this.”
The use of facial recognition software for commercial purposes is becoming more common, but as Amazon scans faces in its physical shop and Facebook trawls users’ photos to add tags, those concerned about their privacy are fighting back.
Berlin-based artist and technologist Adam Harvey aims to overwhelm and confuse these systems by presenting them with thousands of false hits so they can’t tell which faces are real.
The Hyperface project involves printing patterns on to clothing or textiles, which then appear to have eyes, mouths and other features that a computer can interpret as a face.
This is not the first time Harvey has tried to confuse facial recognition software. During a previous project, CV Dazzle, he attempted to create an aesthetic of makeup and hairstyling that would cause machines to be unable to detect a face.
Speaking at the Chaos Communications Congress hacking conference in Hamburg, Harvey said: “As I’ve looked at in an earlier project, you can change the way you appear, but, in camouflage you can think of the figure and the ground relationship. There’s also an opportunity to modify the ‘ground’, the things that appear next to you, around you, and that can also modify the computer vision confidence score.”
Harvey’s Hyperface project aims to do just that, he says, “overloading an algorithm with what it wants, oversaturating an area with faces to divert the gaze of the computer vision algorithm.”
The resultant patterns, which Harvey created in conjunction with international interaction studio Hyphen-Labs, can be worn or used to blanket an area. “It can be used to modify the environment around you, whether it’s someone next to you, whether you’re wearing it, maybe around your head or in a new way.”
Explaining his hopes for how technologies like his would affect the world, Harvey showed an image of a street scene from the 1910s, pointing out that every figure in it is wearing a hat. “In 100 years from now, we’re going to have a similar transformation of fashion and the way that we appear. What will that look like? Hopefully it will look like something that appears to optimise our personal privacy.”
To emphasise the extent to which facial recognition technology changes expectations of privacy, Harvey collated 47 different data points that commercial and academic researchers claim to be able to discover from a 100×100 pixel facial image – around 2.5% of the size of a typical Instagram photo. Those include traits such as “calm” or “kind”, criminal tendencies like “paedophile” or “white collar offender”, and simple demographics like “age” and “gender”.
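The 2.5% figure is easy to check, assuming a typical Instagram image of 640×640 pixels (an assumption; the original comparison doesn’t state which size it used):

```python
# Compare a 100x100 facial image with an assumed 640x640 Instagram photo.
face_pixels = 100 * 100          # 10,000 pixels
instagram_pixels = 640 * 640     # 409,600 pixels (assumed typical size)

ratio = face_pixels / instagram_pixels
print(f"{ratio:.1%}")  # -> 2.4%
```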
Research from Shanghai Jiao Tong University, for instance, claims to be able to predict criminality from lip curvature, eye inner corner distance and the so-called nose-mouth angle.
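The features named there are plain geometric measurements over facial landmark coordinates. A minimal sketch of how such measurements might be computed, with made-up landmark positions (the study’s actual landmark model and normalisation are not reproduced here):

```python
import math

# Hypothetical facial landmarks as (x, y) pixel coordinates.
landmarks = {
    "left_eye_inner":  (42.0, 38.0),
    "right_eye_inner": (58.0, 38.0),
    "nose_tip":        (50.0, 55.0),
    "mouth_left":      (40.0, 68.0),
    "mouth_right":     (60.0, 68.0),
}

def distance(a, b):
    """Straight-line distance between two landmark points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Eye inner-corner distance: gap between the two inner eye corners.
eye_inner_distance = distance(landmarks["left_eye_inner"],
                              landmarks["right_eye_inner"])

def angle_at(vertex, p1, p2):
    """Angle in degrees at `vertex` between rays to p1 and p2
    (no wrap-around handling; fine for these points)."""
    a1 = math.atan2(p1[1] - vertex[1], p1[0] - vertex[0])
    a2 = math.atan2(p2[1] - vertex[1], p2[0] - vertex[0])
    return abs(math.degrees(a1 - a2))

# Nose-mouth angle: angle at the nose tip subtended by the mouth corners.
nose_mouth_angle = angle_at(landmarks["nose_tip"],
                            landmarks["mouth_left"],
                            landmarks["mouth_right"])
```

Whatever a classifier then does with them, the inputs behind such “criminality” claims are just a handful of distances and angles like these.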
“A lot of other researchers are looking at how to take that very small data and turn it into insights that can be used for marketing,” Harvey said. “What all this reminds me of is Francis Galton and eugenics. The real criminal, in these cases, are people who are perpetrating this idea, not the people who are being looked at.”
Harvey and Hyphen-Labs plan to reveal details on the Hyperface project this month, as part of Hyphen-Labs’ new work NeuroSpeculative AfroFeminism.