Privacy / Security
April 01, 2014
Imagine We Had No Transaction Receipts...
So, imagine you go to the store and ask to buy a coffee. There is no cash register, no transaction receipt is given to you, but you are handed the coffee. They don't say anything. Your payment is invisible. You don't know how much it will be, but you agree to the opaque terms. If you get food poisoning later, it's going to be a huge hassle proving you were there, though it's possible. The authorities in charge of investigating food poisoning issues would need some proof. Maybe you threw away the cup, maybe you still have it. Maybe there is video surveillance, maybe not.
No receipt for tax purposes, for proving the cost to the vendor, for your expense report, or as documentation of what you purchased; no warranty or food safety proof; no date, time, place, or anything. You just have a cup of coffee.
That's what it's like to go to a vendor online or on your phone, make an account, and share some data. You do get something, but you don't really know what you "paid," you have no receipt after you agreed to get the service, and you have nothing from the vendor, other than maybe a confirmation email.
Now imagine the opposite:
That is the Open Notice and Consent Receipt system from the user perspective.
March 31, 2014
"Big Data" if Unspecfic, is Ridiculous
Here is a more specific look at what Big Data means, as a term:
There is your data. There is "little data": when you share it, it stays wrapped around you as the user, centralized, and what gets called "Big Data" is often really just a large amount of that little data. Then there is Big Data that you as a user co-create with a vendor or service, which is relatable back to you but is wrapped around objects, data models, and identifiers that are first about the object and not about you. And then there is aggregated data that is depersonalized, though it may still be possible, with some detective work, to find you.
My point in making this distinction is to note that talking about Big Data in an unspecific manner is a great opportunity to misunderstand, to miss potential solutions that apply to parts of this scale, but not all, and to talk past each other when we are discussing problems and solutions in the privacy arena.
February 19, 2014
Who says kids don't value privacy? And who says they won't pay for it? WhatsApp and Privacy
One of the interesting elements for me here is that kids were okay giving WhatsApp their data, for then (for now?), knowing there would be no ads, because it created "parent privacy" through the app, and reduced their costs of sending TXT messages through the telcos.
I pay $20 a month for a flat rate of unlimited TXT msgs, SMS, *and* unlimited free cell-to-cell calls. I did it for the calls, which otherwise cost 10 cents anytime during the day. I moved my plan from the 4th-highest minutes to the lowest, because almost all my calls are to other cells.
However, because I went from 500 texts (and 25cents for each additional) to unlimited, I now use about 2k texts. But every text is listed, time, date, phone number, on my bill, and that's easily sortable online if you log into the cell company's website. And my telco and many other apps have access to those messages.
Parents that want to track their kids, just sort the calls, track the times, etc.
Kids are paying $1 to both stop any additional costs for texting, and to stop the tracking.
I think this is a very interesting development.
What data does WhatsApp see in your phone?
Your phone has more intimate data about you than Facebook, in many ways because it's implicit, not explicit. WhatsApp doesn't need you to tell them your favorite movies or where you live; they know through the discussions, they know your real friends list based upon contacts and activity in your phone.
Here is the list of the data you agree to give WhatsApp for an Android install:
Your SMS messages
Storage -- contents of your USB storage
System tools: all shortcuts -- plus modify shortcuts including installing them and uninstalling them
Your location: AGPS and GPS
Microphone: record audio
Camera: take pictures and video, see your photos and video
Your application information: retrieve any running app, find all apps
Your personal information: read your own contact card
Your accounts: add or remove accounts, create accounts and set passwords, use accounts on the device
Network communications: connect and disconnect from wi-fi, full network access
Phone calls: direct call phone numbers, read phone status and identity of phone
Your social information: modify your contacts, read your contacts
Sync settings: read sync settings, read sync statistics, toggle sync on and off
System tools: modify system settings, test access to protected storage
Affects Battery: control vibration, prevent phone from sleeping
Your applications information: run at startup
Network Communications (a second listing): Google play billing service, receive data from Internet, view Wi-fi connections, view network connections
Your accounts (second listing): Find accounts on device, read Google service configuration
That's a lot of info. I would argue that this is more personal information than what you post voluntarily on FB.
But I think the kids were mostly looking for parent-privacy, not privacy from telcos, the government, or data aggregators. And WhatsApp gives it to them, and reduces the cost of text messaging on the phone to $1 a year.
Brilliant, and worth every penny of the $16-19b Facebook paid. WhatsApp is reported to have 450m active users; divide $19b by that and you get roughly $42 a user, or roughly $35 a user at $16b.
When Flickr was bought, Yahoo paid $111 a user. With revenue of $25 a person x 60,000 paid users.
Myspace was $36.
Instagram was $28.
Skype was a whopping $264.
See more at Statista.
I don't know how many paid users WhatsApp has; the service is free the first year, then $0.99 a year after that. I suspect we'll find out how many at Facebook's next quarterly call, because I can't find that number anywhere right now.
But WhatsApp sold for an amount that is comparable for a "consumer" service. And reasonable, even if $19b is a mind-blowing number in the scheme of things.
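As a sanity check on the per-user figures above, here is a small Python sketch of the arithmetic. The deal sizes and the 450m active-user count are the reported numbers quoted in this post; the function name is just illustration:

```python
# Back-of-the-envelope: what an acquirer paid per active user.
def price_per_user(deal_value: float, users: float) -> float:
    """Acquisition price divided by active users at deal time."""
    return deal_value / users

# WhatsApp: reported $16-19B for ~450M active users.
for label, value in [("$19B", 19e9), ("$16B", 16e9)]:
    print(f"WhatsApp at {label}: ${price_per_user(value, 450e6):.2f}/user")
# At $19B this works out to roughly $42 per user; at $16B, roughly $36.
```

The same function applied to any of the comparables above (Flickr, Myspace, Instagram, Skype) reproduces those per-user figures from the deal size and user count.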
February 09, 2014
Data Privacy Legal Hack-A-thon, Day 2: Projects
UPDATED: As we get down to the wire on presentations tonight at 5pm, the room is quiet and everyone is working hard. One of our judges, K. Waterman, is walking around, conversing with whoever has a minute. And we have settled out to these project teams:
Safe Sign-up: This will encrypt volunteer signups for events, especially protests, so that there is not one place that would have all the people at the event. Event organizers would have 5th Amendment protection for this information. By: Zaki Manian, Restore the Fourth, SF.
Bring your Own Chat: A secure zero-knowledge chat application using only Drop Box. By: Daniel Roesler, Restore the Fourth, SF. The project can be found here at Github: https://github.com/diafygi/byoFS.
Privacy Enhancing Toolkit: A toolkit for encrypted communications, file storage and sharing. By Judi Clark & Jenny Fang.
Bitcoin Privacy Documentation: Developing a framework for thinking about the privacy of financial transactions using Bitcoin. By: Alice Townes, Richard Down.
Mobile Privacy Shield: Intercept and display all the async calls for websites using a Firefox add-on. By: @nyceane.
I'm working on a presentation for tonight's closing, on the Open Notice (ON) project and the consent receipt; not to be judged, just to show the concept to the room.
February 08, 2014
Data Privacy Legal Hack-A-thon, Day 1
We have five (5) projects going in San Francisco at the Data Privacy Legal Hackathon. After an initial introduction phase and discussions, teams broke out and are all quietly working away.
We have 3 groups and 2 individuals working on projects.
After we talked a bit, he realized the value of the parts I'm working on with the Consent Map, Consent Receipt and various tools to make that happen, like the API project to the map. We went over the whole ecosystem we all propose and he sees the complementarity.
Here is a diagram that shows some of the different products we discussed above:
But that group is more interested in getting privacy policies structured and visualized than in the other side of the transaction, which would look at terms an individual would submit, like Do Not Track. However, they recognized that there is a need for a consent receipt at the end of either side setting a term.
There is also a Bitcoin project for more private transactions and identity privacy (i.e., moving from taking things outside the financial networks, where you still have some kind of identity inside Bitcoin, to taking things outside the identity systems in Bitcoin). I don't totally understand it, but that's what they are talking about and trying to figure out.
There is an https server project, and another individual project that I haven't yet discussed with the maker.
I'm working on the consent receipt. Other groups will likely want to hook into the consent receipt when they have their pieces.
January 27, 2014
The New American Radical: Upholding the Status Quo in Law (i.e., the Constitution)
So what does that mean... the Status Quo? What I mean by that is the body of law we count on, that we base everything on, already in place: the Constitution, the Bill of Rights (amendments 1-10) and the rest of the Constitutional Amendments. That status quo.
And wanting to just maintain the Status Quo, uphold and use it, as our standard of law, as the basis for what we do in the US? Yea, supporting that is the New American Radical act amongst the New American Radicals (you can count me amongst them as that's the system I signed up for... the one with the Constitution).
How can this be? Asking for such should be a traditionalist thing, leaving the radicals to ask for new amendments, change 'you can believe in' yada yada and other controversial innovations to the law? But no.. it's a radical act in America these days to just ask that we uphold the Constitution, the Bill of Rights and the Amendments.
I realized this is true, the other night, when I went to hear Daniel Ellsberg speak, along with Cindy Cohn of EFF, Shahid Buttar and Norman Soloman, along with Bob Jaffe moderating. And yes.. Ellsberg's an American Radical, but not just because he got the Pentagon Papers out 40 years ago. It's because he believes in the Constitution, the Bill of Rights, our other Amendments to be the rule of law. He had some very interesting things to share as well.
Ellsberg talked about how years ago, "Richard" Cheney (as he called him.. I'm so used to "Dick") communicated a desire to change the constitution because he thought it was wrong, and that it should be different. Ellsberg said that that's okay, but then you have to change things through the system. Instead, Cheney and Bush and others have been corrupt, because they got elected, swore an oath to "defend the Constitution of the United States against all enemies, foreign and domestic" but then subverted the rules they swore to uphold. (I knew they weren't honorable men, but I never thought about it in these terms.)
So in this case, they are the enemies, these corrupt parties, who subvert the Constitution, by taking, ".. your tax dollars, taken in secret, and spent in secret, to spy on everyone."
Ellsberg's example of a founding father who parallels the whistleblower / leaker of today is Nathan Hale, the man who was caught by the British and hanged in 1776 for trying to share information with his own countrymen, Americans, about what the British were doing. Hale's famous line is: "I only regret that I have but one life to give for my country."
What if we hanged people like that today, the people who leaked the full breadth of what was happening at Abu Ghraib instead of the public just seeing the sanitized, reduced version that claimed it was just a few isolated incidents, when in fact the torture at Abu Ghraib was huge and widespread and very shameful for us and our government? Or the Extraordinary Rendition program? Or Warrantless Wiretapping?
All these secretive activities changed when they became public. And they changed as a result of whistleblower-leakers sharing information the government didn't want to get out, with the exception of Congress legalizing Warrantless Wiretaps once that activity became public. And now things are changing again because of Edward Snowden and the NSA surveillance information he let out.
Ellsberg said, "To have knowledge of every private communication, every location, every credit card charge, everything.. to have one branch have power over the other two (executive, over legislative and judicial).. Snowden has confronted us with something that we could change.... But Obama is part of the problem. He just assures us that there is nothing to worry about. But who is to be trusted? The people who kept the secrets and lied to us? Diane Feinstein? Or do we trust Snowden? Snowden has done more to support the Constitution than any Senator, Congressman, the NSA ... "
Ellsberg also talked about how when he was in trial, 40 years ago, he was out on bail, and could speak freely with the press. Today, if Snowden were on trial, he'd be in a hole, like Chelsea Manning. We wouldn't hear his thoughts on the issues in the trial, because the government would stop it, in trial and outside.
During Ellsberg's trial, his lawyer tried about 5 times to get motive into the questioning, but the prosecution kept objecting. Motive didn't matter they said, and the judge agreed. The same thing would happen to Snowden, who would never be able to say, on the stand, why he did what he did.
Cindy Cohn, who has heroically been bringing lawsuit after lawsuit to stop some of these illegal practices, talked about how originally the FISA court started out approving targeted warrants, so at least it knew who was targeted. But things have devolved to where the FISA court is now presented with massively expanded, abstract warrants, such that the court doesn't even know who specifically is targeted. Smith v. Maryland, which ruled on the pen register method of warrantless wiretapping of a single land line, "..doesn't even pass the giggle test" when applied to the massive surveillance we undergo now.
In fact, she said, "Technology is our friend, encryption is our friend." While major companies have been compromised, we need to develop technologies to help us, as much as we need to use legislative policy and the judicial system to fix this. Even companies are pushing back: five large tech companies had to get together last week and tell the government to stop hacking them, or they would lose customers and be severely affected.
Cindy recommended we tell legislators to vote against the sham FISA Improvement Act, and instead support the USA Freedom Act and the Surveillance State Repeal Act, which have bipartisan congressional support.
"The days in which you can separate corporate surveillance and government surveillance are over.... The 3rd party doctrine undermines privacy, because *we all* give our data to 3rd parties." She went on to say that the tools for organizing against each type of collection are different, but the issues are similar.
Lastly she noted that for 9/11, collection wasn't the gap. They knew about the guys. Sharing between agencies was the gap. Yet we haven't solved for that but we are collecting like mad!
One other mention, Shahid Buttar spoke, but also performed a prose rap he's written, and he's running a Kickstarter to raise money (it's up Feb 6 so donate now) to do a professional video. (Reminds me a bit of Eddan Katz's Revolution is Not an AOL Keyword).
Note also that we are doing the Data Privacy Legal Hackathon in 12 days! Join us to work on this problem technically in SF, NYC, and London, or join us online if you can't make it in person.
Whether you support the artistic, legal or technical ways of addressing massive government surveillance and the subversion of the Constitution, stand up for your rights under the constitution.
Feel what it's like to be a Radical American!
Because if you believe in the Rule of Law and the Constitution, you probably are a Radical American! Just like our forefathers and foremothers.
January 16, 2014
Data Privacy Legal Hack-A-thon
This is an unprecedented year for documenting our loss of privacy. Never have we more needed to stand up and team up to do something about it. In honour of Privacy Day, the Legal Hackers are leading the charge, inspiring a two-day international Data Privacy Legal Hackathon. This is no ordinary event. Instead of talking about creating privacy tools in theory, the Data Privacy Legal Hackathon is about action! A call to action for tech & legal innovators who want to make a difference!
We are happy to announce a Data Privacy Legal Hackathon and invite the Kantara Community to get involved and participate. We are involved in not only hosting a Pre-Hackathon Project to create a Legal Map for consent laws across jurisdictions, but the CISWG will also be posting a project for the Consent Receipt Scenario that is posted in on the ISWG wiki.
The intention is to hack Open Notice with a Common Legal Map to create consent receipts that enable 'customisers' to control personal information. If you would like to get involved in the hackathon, show your support, or help build the consent receipt infrastructure, please get involved right away; you can get in touch with Mark (dot) Lizar (at) gmail (dot) com, Hodder (at) gmail (dot) com, or join the group pages linked below.
Across three locations on February 8th & 9th, 2014, get your Eventbrite Tickets Here:
This two-day event aims to mix the tech and legal scenes with people and companies that want to champion personal data privacy, connecting entrepreneurs, developers, product makers, legal scholars, lawyers, and investors.
Each location will host a two-day “judged” hacking competition with a prize awarding finale, followed by an after-party to celebrate the event.
The Main Themes to The Hackathon Are:
- Crossing the Pond Hack
- Do Not Track Hack
- Surveillance & Anti-Surveillance
- Transparency Hacks
- Revenge Porn Hack
Prizes will be Awarded!
- 1st Prize: $1000
- 2nd Prize: $500
- 3rd Prize: $250
There are pre-hackathon projects and activities. Join the Hackerleague to participate in these efforts and list your hack:
- A Consent Legal Map & Schema Project to create a legal map of the consent laws as a legal hackers tool for the event and projects posted at the event (many volunteers needed)
- Brainstorming List of Hacks - Add your ideas
- Share Tech and Links Page – Share your Knowledge
- Hacks (Project) Page – Propose or Join a project
- IRC Channel for Discussion
Sponsorship Is Available & Needed
Any organization or company seeking to show active support for data privacy and privacy technologies is invited to get involved.
- Sponsor: prizes, food and event costs by becoming a Platinum, Gold or Silver Sponsor
- Participate: at the event by leading or joining a privacy hack project
- Mentor: projects or topics that arise for teams, and share your expertise.
Contact NYC sponsorship: Phil Weiss email or @philwdjjd
Contact Bay Area sponsorship: Mary Hodder – Hodder (at) gmail (dot) com - Phone: 510 701 1975
Contact London sponsorship: Mark Lizar – Mark (dot) Lizar (at)gmail (dot) com - Phone: +44 02081237426 - @smarthart
June 16, 2013
Thoughts About the Value of My Personal Data
Financial Times has a calculator for the value of your personal data. The numbers they use to calculate this are old, but even if they were new and fresh, this is the wrong discussion.
I don't care that my data isn't worth much on the open market. Because my data leaks everywhere constantly, many parties can aggregate and sell it; the market is commoditized, and in this market my data is worth very little.
My data is worth a lot to me, and it's worth protecting to me (as in, I'm willing to go to a lot more trouble over just my slice of data, than any of these companies are to protect *my* data).
In this way, the tragedy of the commons (here, the personal data aggregation commons) may be turned around from the old version. In the old version, individuals didn't do anything about the commons, while those with monetary or other big interests in protecting something did take action (think big copyright holders; my single interest in copyright law might not be worth my spending a lot of time fighting their lobbying efforts, because to the average person, big copyright isn't that big a deal.. hence, the tragedy of the copyright commons). The shift in the personal data commons we have now, where companies just hoover up everything in order to sell your commoditized data, reflects a situation where the individual is highly motivated to protect their little mini-garden slice of their own data, to control the inputs and outputs, if the proper tools are in place to help us do it.
I think the FT calculator reflects the tragedy of the personal data commons model, where Big Personal Data Aggregators attempt to sell our data in a commodity market, typically for a few cents to less than a buck (I came in at $0.9792, or just under a dollar; but over what period, I don't know. Is this for each request for my data? That could be a lot of dollars over a year, I suppose).
If I stop some of my data going to the big aggregators, I can't imagine they would notice or really care if one person has some data missing from their profile within the gigantic aggregation system. But my little garden, well tended and organized, becomes much more valuable to me than $1 a hit. Now if someone wants the well-tended, accurate stuff, fully fleshed out, they will have to "pay" a lot, or a little for a small slice. That payment may come in the form of a trade, a discount, or a better deal if I'm buying, or the ability to, say, read the whole New York Times site unencumbered if I share my data with them. Or it may be that I just don't share: I pay cash for what I use online, and then I'm much less a part of the commons, as my data isn't shared out in the marketplace.
But now you see, I've created choice for myself, control, autonomy, and transparency over my transactions.
I think folks on the VRM list, and in a few other places looking at this problem, know that it's my well-tended little garden that will be far more valuable over time, against the old-style, hoovered commodity world. But for now, all the FT can see is the old model. Rear-view mirror. And that's fine. It's just more motivation to bring the tools online for me to collect and organize my own info, and to stop the leaks of our data to the big hoovering agents.
Also.. T.Rob has a great post that also reacts to the FT article -- he too rejects the premise of the argument FT makes: "The personal data to which the FT article refers is like crude oil. The personal data which we should be worried about is like premium unleaded gas. Either way, it's about you, directly impacts you and has market value to everyone but you. Don't let anyone tell you it has no value. Even the Financial Times."
February 08, 2012
SOPA/PIPA: Why We Need to Consider Compulsory Licensing Once Again
Paul Tassi over at Forbes has a great article titled You Will Never Kill Piracy, and Piracy Will Never Kill You. He talks about how Hollywood is trying to drive Netflix out of business by increasing the fees it charges, when in fact Netflix is the lifeboat Hollywood needs.
But Tassi isn't going far enough, I believe, in looking at Netflix as an example of a Silicon Valley lifeboat for Hollywood. Netflix is a microcosm of what could happen, across the internet and all users, if we looked at compulsory licensing for all media and users, and not just Netflix customers. Netflix is a great model for what could exist across the internet.
Denise Howell invited me to This Week in Law (TWiL 146: Mary Hodder and the Lifeboat of Fire), and of course the SOPA/PIPA thing came up.. and I referred to Terry Fisher's compulsory licensing ideas (though several others had their own versions of compulsory licensing too). He was at the Berkman Center at the time, and still is, and lots of folks commented (like Ed Felten, Ernie Miller and Derek Slater back in the day... this link goes to a page listing a year's worth of CL discussion in 2003).
At the time, in 2003, I advocated against compulsory licensing, in favor of a P2P system that would pay artists and end the copyright wars from Hollywood. Well, that was wishful thinking and never happened, and in the meantime, we have loads of Hollywood payola flooding WDC looking for even more draconian laws than what we have now, which will be quite harmful to the internet as an ecosystem.
So as the world has shifted over the past 10 years, I realize we need to revisit compulsory licensing, with built-in privacy so we maintain our "right to read anonymously" (per Julie Cohen.. an amazing thinker) and deal with other issues like counting, watermarks, and tracking (guess what: 10 years later, we all realize that thousands are tracking everything each of us does online every day.. so while I want my clickstream, etc., to be private and user-controlled, I'm less concerned about this now, as far as compulsory licensing goes, than I was in 2003).
So my thought is: why not collect a fee at the front end of each month, across internet service points, from users? If no one uses any media, the funds stay in escrow with the ISP and non-users don't pay. But if media is used in a given month (downloaded, etc.), monies are distributed to copyright holders. And if works are in the public domain? No payments go out either. Yes, it would require a giant copyright registry, and ISPs to track (let's say for 90 days, before dumping a user's media list) what anyone on an ISP-provided connection used, in order to distribute fees. And it would require a giant fight in Hollywood about who gets paid what, for what, at what time, etc. Hey, maybe that will mean you can watch a first-release movie on opening day, on your iPad, where a larger share goes to that copyright holder because of the timing of your consumption?
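The collect-then-distribute step above can be sketched in a few lines. This is a minimal illustration, not a proposal spec: the fee amount, names, and log format are all hypothetical, and a real system would sit on the giant registry and ISP tracking just described:

```python
from collections import Counter

MONTHLY_FEE = 5.00  # hypothetical flat fee collected up front by the ISP

def distribute_fees(usage_log, fee=MONTHLY_FEE):
    """Split one subscriber's monthly fee pro-rata across the rights
    holders of the works they actually used.

    usage_log: list of (work_id, rights_holder) tuples. Public-domain
    works are logged with rights_holder=None and generate no payout.
    Returns {} when there was no compensable use (fee stays in escrow).
    """
    plays = Counter(holder for _, holder in usage_log if holder)
    total = sum(plays.values())
    if total == 0:
        return {}
    return {holder: fee * n / total for holder, n in plays.items()}

payouts = distribute_fees([
    ("movie-1", "StudioA"),
    ("song-7", "LabelB"),
    ("song-7", "LabelB"),
    ("old-film", None),  # public domain: counted as use, but no payout
])
print(payouts)  # StudioA gets 1/3 of the fee, LabelB gets 2/3
```

Weighting by timing (e.g., a larger share for an opening-day first release) would just replace the flat play count with a per-work weight.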
In my view, figuring out how to solve the Hollywood problem with compulsory licensing is worth doing, by getting all the smart people who understand networks, and licensing, and all the other hairy stuff that will come up in a room and working it out. It would get artists paid, and it would get the users whatever they want in terms of media, and it would get Hollywood into the lifeboat that Silicon Valley offers, finally.
May 29, 2011
Discussion: Building for a Personal Data Ecosystem - A Case Study
Just left the Quantified Self conference, where I led a session in the last breakout on "building for a personal data ecosystem." Since we weren't on the official program, I was very happy to be holding something in an Infinity session. Fifteen or so people came, and I talked about the Personal Data Ecosystem Consortium and our mission for a user-centric data model where users control their data through agents, or Personal Data Stores. I also mentioned what I was seeing at the event: lots of folks building apps, making new silos of data, and repeating the model where it's an open question who owns users' data, and users don't really have access to their data except through a service's website and possibly an API that might send a little data somewhere else (like Twitter or Facebook).
I suggested that in a Personal Data Ecosystem, app makers could take data from their users and send it straight through to the users' Personal Data Stores (PDS). That way, if the app or hardware changed or ceased to support their old systems, the user would still have their old data to play with in their PDS. And I talked about open formats for the data (think: what about an open format for heart monitor data, where your pulse is described and you can take that data anywhere?). Services could focus on just providing a great service, instead of trying to manage all the user data storage and security. Users would control their data in their Personal Data Stores/Lockers/Banks, and I said that a bunch of companies were building these PDSs, including Sing.ly, which is building the Locker Project.
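To make the pass-through idea concrete, here is a hedged sketch. The record shape and the PDS endpoint are hypothetical (no such standard existed at the time); the point is only that the app builds an open-format record and delivers it to wherever the user's chosen PDS lives, rather than keeping it in the vendor's silo:

```python
import json

def make_reading(user_id: str, metric: str, value: float, ts: str) -> dict:
    """Build an open-format record the user could carry to any PDS."""
    return {"user": user_id, "metric": metric, "value": value, "time": ts}

def forward_to_pds(pds_endpoint: str, reading: dict) -> str:
    """Package a reading for delivery to the user's chosen PDS.
    A real implementation would POST this over HTTPS with the user's
    authorization; here we just produce the JSON payload."""
    return json.dumps({"endpoint": pds_endpoint, "record": reading})

# Hypothetical example: a heart-rate reading forwarded to the user's PDS.
payload = forward_to_pds(
    "https://pds.example.net/alice/inbox",
    make_reading("alice", "heart_rate_bpm", 72.0, "2011-05-29T10:00:00Z"),
)
print(payload)
```

If the device vendor later shut down, the user's readings would already live in their own store, in a format any other app could read.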
Sing.ly happened to have someone there, Jared Hansen, who is a developer on the open source project. And there was a guy from Basis, Bashir, who is building hardware (like a wristwatch) that monitors things like your heart rate, though it monitors many other things on your body as well. We also had a couple of health researchers there, plus other health and wellness companies looking at data, as well as Ian Li of Carnegie Mellon, who is researching data collection and normalization, and a woman from the EFF. And we had a couple of users who talked about what users need.
After a few minutes, Bashir from Basis explained their dilemma: the hardware isn't all that profitable for them. So initially they were questioning what to do with the data and how to monetize the company. Should they sell the data, or give it to users, or charge users for it, or give it away to developers who could create a great ecosystem by building lots of apps, thus driving more sales? And whose data is it?
So we were off and running, with the impromptu Basis use case: how to get the value of the data, include the user and give the user choice and autonomy, and how to leverage what is being done out in the marketplace and developers' creativity with data. Oh, and don't forget about participating in microformats and Activity Streams creation, to make bottom-up, grass-roots standards for data formats and exchanges.
We talked through what it would mean to give away the data, support users and ask them if they wanted their data included in studies, get additional revenue for Basis while maintaining the inclusion of the user in the process and what developers could and should do. We brainstormed a lot of things, and covered the good and bad points of how it would all work and how to support Basis' market model while still being good and fair to the users.
I have no idea what Basis will do, but I would love it if they would join the Personal Data Ecosystem Consortium in the Startup Circle, to help build out ways to make a user-centric data system for users' wellness data collected with Basis hardware.
What an amazing opportunity Basis has for doing the right thing for users, and leading the wellness and personal data ecosystem by creating a win-win for themselves and users. They could create a new market for wellness data, that is user driven.
Frankly, we need more discussions like this. It's not about Do Not Track models where we kill all the data plus the value of it, and it's not about "business as usual" where the user isn't included and businesses do whatever they want with user data.
It's about creating markets that do right by users and have companies making money ethically and conversing with us in the market.
Thanks to everyone who came! We had many representatives of the relevant stakeholders and the discussion was enlightening and rare.. but one I hope to make more common in the near future!
May 28, 2011
Where is the Personal Data Awareness? And What Are the Missed Opportunities at QS2011?
I'm at the Quantified Self Conference in Mountain View today and tomorrow.
A few thoughts. There are lots of people here from various disciplines: health care, tech companies like 23andme.com that marry personal genomics and tech, apps makers and health and wellness hardware makers. And lots of folks just wanting to track themselves.
Sessions are preprogrammed (in other words, the conference is all done in top-down broadcast mode), and now and then, in people's statements, a person will pass along the vibe of the old-style medical industry (that is: we know more than you and we'll tell you what's true; that mode was in the opening session, where we were lectured to). Though I just walked through all the sessions in round 1, and the individual breakout sessions are more discussion mode, which is great to see.
There was a near complete lack of consciousness about protecting users' data as I walked in and spent a few minutes in each of the first 6 sessions. The implicit assumption was that "we" (builders, companies, etc.) can take data and use it for whatever "we" want. Building something other than more silos with data lock-in, that is, building for a Personal Data Ecosystem model where users keep their own archives and data, choose where their data goes and what purpose it's used for, and control what is happening, isn't on the radar. It is especially important that we look at issues of privacy, control, autonomy, choice and transparency for the highly personal, very sensitive data collected around personal wellness and health.
There is a single session, led by lawyers, about privacy in round 2. But the rest of the sessions do not seem to be aware at all that they need to build for privacy from concept onward: data control by the users, users keeping their own data, and the applications, devices and monitoring tools "using" the data with permission.
And there is no session about personal data control, where the QS apps would work on a Personal Data Store. I've asked to have one.. but we'll see if they decide to let me do it. The assumption is developers will just build more silos with more data collected about you, crossed with other data about you, that once combined, creates yet another silo of data. There may be an API available, but effectively, the data is stuck in another silo that a regular user can't really get at, hold, control, share, correct or delete.
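To make the contrast concrete, here is a minimal sketch (in Python, with hypothetical names; this is not any real PDS API) of the permissioned model: the user's store holds the data, and an app can only read fields the user has explicitly granted for a stated purpose, with consent revocable at any time.

```python
class PersonalDataStore:
    """Toy personal data store: the user holds the data; apps get scoped grants."""

    def __init__(self, data):
        self.data = data          # e.g. {"heart_rate": [...], "sleep": [...]}
        self.grants = {}          # (app, field) -> permitted purpose

    def grant(self, app, field, purpose):
        """User explicitly permits `app` to read `field` for `purpose`."""
        self.grants[(app, field)] = purpose

    def revoke(self, app, field):
        """User can withdraw consent at any time."""
        self.grants.pop((app, field), None)

    def read(self, app, field, purpose):
        """Apps 'use' the data with permission; no grant, no data."""
        if self.grants.get((app, field)) != purpose:
            raise PermissionError(f"{app} has no grant for {field}/{purpose}")
        return self.data[field]


store = PersonalDataStore({"heart_rate": [62, 64, 61]})
store.grant("wellness_app", "heart_rate", "trend_chart")
print(store.read("wellness_app", "heart_rate", "trend_chart"))  # granted: data flows
# store.read("ad_network", "heart_rate", "profiling") would raise PermissionError
```

The point of the sketch is the inversion: the silo's question is "what can we take?", while the PDS's question is "what did the user grant, and for what purpose?"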
It's dismal.. thinking about how all this highly personal data is just assumed to be owned by apps makers and companies and users are just cows in a big milking system. The participants of QS are just continuing the tradition started by the health industry and continued by tech company silos in making the users say "Moo." Pick your ecosystem and prepare to be milked.
Lastly, I'm really happy to report that the QS organizers decided to order a really healthy vegetable lunch salad (with either chicken or tofu on it).. Great work on that front!
May 13, 2011
McKinsey's Research Arm Claims Big Data Mining Will Save Us All
Steve Lohr has a write up in today's NYTimes: Mining of Raw Data May Bring a Surge of Innovation about McKinsey & Company's report on Big Data: The Next Frontier for Innovation, Competition and Productivity.
I think we need to challenge assumptions about the inputs... compare the inputs from "hoovered" personal data to that of what people assemble in personal data stores operating in a Personal Data Ecosystem.
Execs from Rapleaf and Intelius have admitted publicly, recently, that they know half their data is bad; they just don't know which half. I also sat recently with the woman from Experian who is in charge of segregating and keeping separate data from the internet (versus financial data, which is regulated) for their offerings about users. When I posited that a lot of her data was likely wrong, she agreed.
Users obscure their data intentionally because they are scared.
For myself, I can tell you that in the last few years, I have obscured data online (birthdate, zip code, name, address, phone number, preferences, email addresses) as well as health info (not to my doctors, but to data collectors whom I do not trust yet who claim they never share the data. For example, you can't get a mammogram in SF / Children's Hosp without sharing a huge amount of very personal data.. so I made it all fake because I don't trust the lab and whoever they sell the data to...). And I fake it to the pharmacy when they ask for more than my basic info to fill a prescription. In fact, my current insurance company has my name and birthdate a little wrong and I'm not correcting them.. because it makes it harder to aggregate my data across systems. Oh.. and my bank spells my name: Hoddler.. and has a slightly incorrect address (don't you love how they key in the wrong data!) and I'm not correcting that either.
I fake all sorts of stuff on and offline... I fail to correct bad data... I know many others do too.. I have been faking my data online since 1994. Somehow even then, without understanding the privacy issues or how the internet worked, I just didn't trust the system, because I knew then that we had no privacy protection in this country (US). As I began working with online technology in 1997 and started really understanding it, I've felt more than ever the need to obscure my data and make it difficult to combine in a pivot about me.
I get that this security by obscurity and mistakes doesn't cut it, but it's the best I can do right now.
So my question for the McKinsey research people is: have they factored this in?
And have they factored in that users have obscured enough information that me at one site cannot be aggregated with me at another site?
Or have they factored in that the people at institutions who key in the data from our driver's licenses get it wrong (my bank, with my name and address), or that correctly filled-out applications end up garbled anyway (my insurance co, with my name and DOB), or whatever?
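The aggregation failure I'm describing is easy to see in miniature. A naive matcher that joins records on exact name and birthdate (a sketch with made-up values; real data brokers use fuzzier matching, but the garbage-in problem is the same) simply fails when a field has been obscured or mis-keyed:

```python
def naive_link(record_a, record_b):
    """Join two records only if name and date of birth match exactly."""
    return (record_a["name"] == record_b["name"]
            and record_a["dob"] == record_b["dob"])

# Hypothetical records: one mis-keyed by the bank, one left slightly
# wrong on purpose and never corrected. Values are invented for the example.
bank = {"name": "Hoddler", "dob": "1960-01-01"}
insurer = {"name": "Hodder", "dob": "1960-01-02"}

print(naive_link(bank, insurer))  # False: the two profiles never merge
```

Every obscured or mis-keyed field is another chance for the profiles to stay split, which is exactly why the "half right" databases stay half right.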
The answer is to give us proper protections for our data. 4th amendment protections and rights over sharing of our data, so that we make sure the data is right. We can aggregate our own data in Personal Data Stores. Then we can trade fairly for that data if we agree to being included in the big data systems McKinsey is saying will help us so much.
I agree big data analytics can help us as a society, but not without good data, and not without including users into the system, as equitable players who deserve to have rights over our data, including choice and autonomy to participate in big data systems.
But until then.. big data is working with databases that are half right.. because we don't have choice, autonomy, rights or protections as users, and that's the first problem with McKinsey's assumptions.
April 29, 2011
Tracking Do Not Track at Morris + King
A bit of Context
Obviously, this diagram is a little cynical (courtesy of Chinagrrrl), but not too far off from how we manage personal data online today. But there are a lot of proposals on the table to fix this dilemma. One is Do Not Track, which industry sees as something they can self-impose on an *opt-in* basis (for themselves) and opt-out (for the users), and self-regulate by having advertising trade orgs monitor compliance, with the FTC stepping in as necessary. There are also a number of DNT bills introduced in Congress and various hearings on tracking where the FTC would regulate implementation. And Senators Kerry and McCain have introduced a Rights and Responsibilities proposal in the Senate that, instead of Do Not Track (Kerry's legislative aide, Danny Sepulveda, told me DNT is a waste of time), suggests ways that data collectors would have to be responsible with our data. However, that bill lets 3rd party marketing, data tracking and Facebook's privacy-bending ways totally off the hook. Both of these plans / legislative initiatives completely ignore the more than 40 startups and companies building for the Personal Data Ecosystem.
That said, the rest of this post describes the Tracking DNT panel at Morris + King the other night.
Tracking Do Not Track
Tuesday night I was on a panel at Morris + King, a PR firm in NYC, called Tracking Do Not Track. Our hosts, Andy Morris and Dawn Barber (who co-founded NY Tech Meetup with Scott Heiferman), were very good about putting together a diverse group of people to talk about Do Not Track and the various issues with personal data and the advertising industry that have so many talking these days. My guesstimate was that about 100 people attended, mostly from industry (tech & advertising).
Our group included:
Brian Morrisey (Editor in Chief of Digiday, an ad industry trade publication) as Moderator
David Norris (CEO of Blue Cava)
Dan Jaffe (Exec VP, Govt Relations for the Assoc of National Advertisers - ANA)
Helen Nissenbaum, Professor, Media, Culture & Communication at New York University
and me: Chair of the Personal Data Ecosystem Consortium
We started off with Brian's question: who are you, what do you do in a nutshell, and what do you think of the state of online privacy these days?
I was first.. and gave a quick explanation of PDEC, which is to say that we offer a middle way between Do Not Track (DNT) and what is going on now online (Business as Usual). Our middle way offers a market solution: users get control of their data, and the tracking and digital dossier building by shadowy companies stops. We don't believe DNT will work and don't support it, though we do see that some kind of "Rights and Responsibilities" legislation would help create a level playing field for any company that collects personal data. Those rights and responsibilities for personal data collectors need to include giving users a copy of their data, so they can then put it into personal data stores (or banks, lockers, etc.) and use the data as they see fit.
Oh, and I said the state of online privacy was pretty dismal, though I was optimistic because it feels like this year it's actually possible to get personal data some basic protections similar to HIPAA or FCRA, where users can get their data, and we can make the Personal Data Ecosystem emerge as a market solution that finally works for people. Granted, it's a 5-7 year proposition to really create a new market, but we can actually start this year because of the 40 or so startups that are funded and building pieces of the PDE, and the push in the US Government to do something about the dismalness of online privacy.
Helen Nissenbaum, whom I've admired for years for her thoughtful approach to privacy and usability, agreed that privacy online was pretty bad, and explained her work around Adnostic, a "privacy preserving targeted advertising" system made with some Stanford folks.
By far, the best comment Helen made all night was that tracking and aggregating data that pivots on people is not ethical: it's bad for people, and for the incremental 1% improvement we might see in targeted advertising, it's not worth the incredible intrusiveness of tracking. In particular she said, "Anonymization does not change intrusiveness."
Dan Jaffe spoke next, and surprise, agreed that online privacy is not good, but talked about how publishers need to support their businesses and that behavioral advertising is helping them do it, and that Do Not Track should be self-regulated by the industry because they know their business best. And government has a tendency to screw up regulations and therefore, we should let advertisers figure out what works.
Next up was David Norris, who agreed with my use of the word, "dismal" to describe online privacy and said that Blue Cava was supporting a self-regulatory model because they didn't feel that Do Not Track as proposed for legislation was a good idea.
We chatted about the viability of Do Not Track, with Norris, Jaffe and me all agreeing it wasn't a good idea. However, Jaffe said he didn't like the idea of any regulation, that the industry could do it themselves, and that the "data rights and responsibilities" legislation I support would be just as bad for data collectors.
Folks in the audience, like Esther Dyson, pushed back on Jaffe. She wanted the ability to choose where and when her data was out at some vendor's site, and that's why, she said, "I'm supporting Mary and her organization": because it's a market model that gave her choice.
I was very pleased to hear her endorse us (thank you Esther!)
In the end, I think we got our message out, which is that tracking individuals is a bad thing, that users should be the only ones tracking themselves across sites, but that sites can track within the site to optimize business. And that users should have a marketplace to trade data, like they do in mileage accounts, and choose when they trade, as partners, and not have it done for them in secret as is the case now. And that we want to see users' data protected with a basic set of rights, like health, education and financial data currently is.
Curiously, Dan Jaffe made a comment about HIPAA, the health data protection law, suggesting that since users get their health data, maybe they could get their personal data too. Given that that is a law, and he was opposed to regulation of any sort otherwise, I wasn't sure what to make of this.
However, I was really pleased with the opportunity to talk about PDEC, the startups and tech efforts to create a personal data ecosystem, and to provide a different view than the usual support for Do Not Track as we try to figure out what is best for our society.
Thanks Andy and Dawn for inviting me!
March 12, 2011
The right to oblivion
Yesterday at this NCUA ICANN meeting in SF the right to oblivion was mentioned several times. It seems to be on people's minds as they try to figure out what privacy and data control mean to companies, to users, to privacy advocates and regulators.
Peter Fleischer, who is Google's Global Privacy Counsel, wrote a post on this topic: "Foggy Thinking about the Right to Oblivion," and I think he missed something very important in the discussion of people wanting to be "let alone." He mostly focuses on explicit data, the kind that users put out there knowingly. But there is also implicit data, that users expect will stay within a website, and yet doesn't.
So I left this comment, but wanted to post it here as well:
I think you are missing an important distinction. There is data a user puts on the web: a facebook comment, a tweet, a flickr photo, etc. And there is data the user didn't expect to go anywhere except stay with the business they do or did business with:
* geolocation logs from one's mobile carrier
* purchases made with a vendor
* financial statements and the various actions one takes with bill pay, online banking and financial organization
* search activity logs
* an email address given to Facebook to be used as a login
Or Facebook gives your email address to Rapleaf who matches it with activities all over the web. You have no idea, nor did you expect this.
Or you search on your mom's medical condition and now the beacons have transmitted the info to advertisers and pharmaceutical companies.
And you thought deleting your cookies would help. A complete waste of time now with flash cookies, beacons and fingerprinting of your computer.
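Why is deleting cookies now a waste of time? Because a tracker doesn't need a stored identifier at all: enough quasi-stable browser attributes, hashed together, re-identify the same machine on every visit. A toy illustration (real fingerprinting uses many more signals, such as installed fonts and rendering quirks; the attribute values below are invented):

```python
import hashlib

def fingerprint(attrs):
    """Hash quasi-stable browser attributes into a single identifier."""
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

browser = {
    "user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6) ...",
    "screen": "1440x900x24",
    "timezone": "UTC-8",
    "plugins": "Flash 10.2;QuickTime;Java",
}

before = fingerprint(browser)
# ...user carefully deletes every cookie...
after = fingerprint(browser)   # same attributes -> same identifier
print(before == after)  # True: clearing cookies changed nothing
```

Nothing was stored on the user's machine, so there was nothing to delete; the identifier is recomputed from the browser itself each time.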
What I think users want is the right to control their own data. The right to ask that it be deleted after a period. The right to correct it if something is wrong, and the right to hold it, so they may store it in a personal data store (PDS).
And why, you ask, would anyone use a PDS? Well.. do you use Mint, or Dopplr, or TripIt, or have a mileage account? For that last one, you can get amazing things like free hotel rooms or plane tickets, or even goods like flowers. We already use personal data stores now.. just very primitive ones. And we want the ability to trade our data because we might get a free book or discounted things. Those markets are yet to be sorted out.. but the apps to make that work are coming.
There is a lot to work out here, but there is a Personal Data Ecosystem coming.. companies are building for it, and frankly, we do need a little regulatory help on the side to support users' rights to their data.
And to keep sites, like the examples above, from sending your data off site through beacons and trackers or other data agreements. Instead, ad companies should be sending websites a black box to process user data internally and then pick relevant ads, so that users' data never leaves the site for any reason, unless the user takes it to their PDS.
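The "black box" idea can be sketched very simply (this is my illustration, not any real ad network's API; all names are hypothetical): the network ships its catalog and matching logic to the site, the match runs against the user's profile locally, and only the chosen ad id ever crosses the site boundary.

```python
def pick_ad_locally(user_interests, ad_catalog):
    """Run the ad network's matching inside the site: the profile never leaves.

    Returns only an ad id -- the single thing sent back to the network.
    """
    best_id, best_score = None, -1
    for ad_id, keywords in ad_catalog.items():
        score = len(user_interests & keywords)  # simple keyword overlap
        if score > best_score:
            best_id, best_score = ad_id, score
    return best_id

# The site holds the profile; the network only learns which ad was shown.
profile = {"cycling", "coffee", "travel"}
catalog = {
    "ad_bikes": {"cycling", "fitness"},
    "ad_pills": {"pharma", "health"},
}
print(pick_ad_locally(profile, catalog))  # "ad_bikes"
```

The design choice is what flows in which direction: the catalog flows in, one ad id flows out, and the behavioral profile stays put.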
It's the right thing to do for people.
February 19, 2011
PDEC Response to the FTC Do Not Track White Paper
Here is a link to the Personal Data Ecosystem Consortium response (pdf) I submitted late last night to the FTC about their Do Not Track white paper (pdf).
I got the letter and Q&A to the FTC (33 pages!) just in the nick of time as submission "00472"... at just about 9pm PST on February 18, after which the FTC shut down the submission site. You can see other submissions here but for now, nothing submitted last week is actually listed. Check back early next week for updates and the PDEC submission.
After working on this for 3 weeks off and on, between other endeavors, it's a huge relief to get it off.
Now the real work begins!
February 07, 2011
Speaking of Speaking.. the Personal Data Ecosystem Emerges
The last two weeks I've been speaking a lot. Why?
On 1/28/11 I was at She's Geeky SF leading a session with Kaliya Hamlin, Executive Director of Personal Data Ecosystem, where about 50 women came to talk about what this emerging organization and space are all about, and to hear about what Kaliya and I were submitting to the Department of Commerce in response to their Green Paper. On 1/31/11 I was at BigDataCamp 2011 (the night before O'Reilly's Strata) in Santa Clara, to lead a session on Personal Data Ecosystems. And on 2/3/11, I was on a panel called CRM versus VRM: Who Controls the Conversation at the Conversational Commerce Summit in SF, also talking about the Personal Data Ecosystem.
Why all this talking? Well.. as I mentioned Kaliya Hamlin and I have submitted a response to the Department of Commerce Green Paper where they asked for comments about the FTC's Do Not Track proposal and options for how to protect user privacy and conduct secure logins, while still engaging in what the DOC does.. which is advise Congress on how to promote commerce in the Union.
I'm the Chair of the Board of Personal Data Ecosystem Consortium.
And I'm currently writing a response to the FTC's Do Not Track proposal.
Why all this work? Well.. I think the two extremes of on the one hand: shutting down tracking, or on the other: allowing a sort of "business as usual" stance for the intense tracking that goes on as we traverse the web, use our cell phones and generally act through digital mediums aren't the answers. We do need to dramatically alter what is happening, but not shut down the data.
Why? Instead of Do Not Track, I want a system where *only I can track myself*. Because I am the *only* ethical integration point for data about me.
Can you imagine if we did a "do not track" in 1979 when Airline Mileage Programs were just getting started? People have benefited enormously from them.. to the tune, per the Economist in 2005, of $700 billion in benefits. People want some self tracking, if they get something of value. They may want their histories private, but able to share a score or a piece of it, when they want. Because our data is gold. And we deserve to benefit from it.
We need to track ourselves, but only if we want to. And there needs to be no tracking of us, across sites, if we don't want it. But if we do, we need the ability to take our data, aggregate it, and trade it for goods, like free plane tickets. And to correct it, or delete it. And a lot of other things I think we can't imagine now. Because the Personal Data Ecosystem, and things like Vendor Relationship Management, are just getting started.
We need to limit the surreptitious stalking of ourselves across digital platforms and sites by others, and take back the ownership of our own data, to be aggregated, deleted and managed only by the individual. And traded when we want to in a marketplace. And we need 4th Amendment protection for our personal data stores.
And we need marketplaces, much like the mileage marketplaces, that allow us to trade our information. We need Personal Data Services that will store our data and make it portable, so that we can move our data when we want to (think taking your money out of one bank and putting it into another). And we need an applications market for developers to do creative and interesting things with our data.
January 12, 2010
Information Technology meets Medical: Why We Should All Be a Little Worried
So, here's the scoop.
In calling into the doctor's office, I got their voice system, which has always required lots of number punching to finally get through to someone to make an appointment. It's better than 10 years ago, when you could literally never talk to anyone in their offices and would just punch numbers endlessly until leaving them a message. That would be followed by a return call that you would invariably miss, forcing you to start the process over to get another call back.. all just to make an appointment.
Anyway, calling in today only requires two selections, before being told my call was in line to be picked up after approximately 6 minutes of estimated wait, OR I could use their online system. Whooppee! I could make an appointment using what I imagined was a calendar with available timeslots to book appointments? So here is Golden Gate Obstetrics (GGObgyn) big chance to show how they are using information technology to help people organize this process of getting an appointment better and faster!
Er... NOT. So. Fast.
The branding all over the site is "Golden Gate Obstetrics," so I'm thinking: okay, this is their site, even though it's got some other root domain name (mymedfusion.com).. in other words, Golden Gate Obstetrics is responsible for my health info, and I just need to get in to see their calendar and choose a time or something. So I go to "create an account" (note: below I've made screen shots of the *second* account I made, called 'testacct', to see what was going on a second time.. since the first time, when I made an account for myself, it went by quickly and I wasn't suspicious until the very end of the process):
As you can see, there's enough data requested there for someone to do some damage if they wanted to. At this point I was getting a little concerned about where this data was going, but keeping in mind GGObgyn's history, where getting staff on the phone to make appointments is so difficult, I went ahead and submitted my data.
The screen instantly took me to a logged in state, saying "we are now your Health Record provider" which I found totally freaky. I don't want them to be my Health Record provider. I just want to schedule an appointment. All this, without requesting any sort of email verification or other checking... just gave me an account. At that point, I could go make an appointment:
To say the least, I was shocked. So I just put in all this personal information, dinked around with forms etc, to be given a glorified email form to request an appointment? With structured data about which day of the week I want the appointment? How about a calendar with available time slots? So I could just pick based upon my availability? No... it appears they are going to email me back or call me with times so we could go back and forth over schedules again, in email? Really? This is the promise of information technology for scheduling? I mean aside from the privacy issues, I really felt like I'd been had in terms of my time sink for their silly email form.
I notice there is no help or privacy statement on any of the pages in their system (and I clicked on all of them), and the "ask a question" page is all about medical stuff, not using the website. But I figure GGObgyn is responsible for this site. So I call them, and after a lengthy wait, get the appointment receptionist. And I ask, where did my data go? And she says she doesn't know, but they own the site, so therefore my data is safe.
ME: "Really? Because my account approval seemed to happen instantaneously on my screen."
Olivia: "Oh yes.. I did that."
ME: "Wow.. you're fast."
After that, she could only talk about how to use the system from her perspective, not mine. In other words, Olivia had no idea what regular users face (i.e., there is no privacy information shown as you type in your personal data, and no real indication, other than reading the URL in the address bar, that a third party might be collecting it. Reading address bar URLs is something most users don't do.)
I told Olivia she literally wasn't getting the problem, because she just kept repeating to me how she uses the system (as an administrator over user accounts and for appointments where, I'm guessing, she has to be seeing an administrator version of the Medfusion system or some kind of much more powerful interface than the one regular users see when they log into the system). So she said she wanted to pass me to their office manager, Laura, who said, as she picked up the call:
"Mary, I've been listening to your call with Olivia" ... er.. okay.. no one disclosed to me that my call with Olivia was going to be monitored by others listening in. Unsettling. And possibly illegal. But whatever, that's really the least of my concerns here.
I told Laura there was no disclosure to me in advance of having a third party get my personal data.. and after Medfusion had it, I had no way of finding out what they are going to do with it.
Laura replied, "Well I can't help you anymore, because this is a waste of our time.. if you didn't want to put your information into MedFusion then you shouldn't have."
ME: "But your voice system told me to. And your name is on the website, and you aren't really disclosing that you are giving my data to a third party, MedFusion, or telling me what they or you are going to do with it."
ME: "But I don't have a fax machine. Can't you email it?"
Laura: "No.. maybe I could scan it and send it in email, but I'm not sure... and there isn't anything else I can do anyway." (It was clear she was trying to end the call.)
ME: "Er... Okay." (And then I hung up.)
No help or contact pages appeared afterward.
By law, we must abide by the terms of this Notice of Privacy Practices. We reserve the right to change this notice at any time as allowed by law. If we change this Notice, the new privacy practices will apply to your health information that we already have as well as to such information that we may generate in the future. If we change our Notice of Privacy Practices, we will post the new notice in our Center, have copies available in our office and post it on our website.
And then, under COMPLAINTS:
If you think that we have not properly respected the privacy of your health information, you are free to complain to us or to the U.S. Department of Health and Human Services, Office for Civil Rights. We will not retaliate against you if you make a complaint. If you want to complain to us, send a written complaint to the contact person at the address shown at the beginning of this Notice. If you prefer, you can discuss your complaint in person or by phone.
I would also recommend that businesses like Golden Gate Obstetrics use the FTC page on protecting their users' data and privacy (additionally, here is a link to the FTC's newer site on how individuals can protect their own data), which is very helpful when trying to figure out how to present privacy info on a website.
Frankly, I have no way to alert anyone at GGObgyn to this blog post, or to my thoughts on the subject, other than to call back, sit on hold, and talk with the three people I already discussed this with, who ranged from unhelpful to hostile. GGObgyn doesn't seem open to discussing their website's problems, and the cat is kind of out of the bag now, with my data going God knows where into various companies' hands. So I'm posting this as an example of how companies, particularly *medical* entities, with no experience or understanding of information technology systems and websites need to use extreme care, and not assume that office staff trained to run a medical office has any idea what users need or will face with a website collecting personal or medical data.
I hope people at medical or other data collection companies will realize the importance of protecting user data and being straight with us about what's happening to personal and medical information. My experience is just one, but if this becomes representative of people's experience with their medical providers, we ought to be very worried.
Note: I took a look, when writing this post, at ratings for Dr. Wiggins, whom I really like and have enjoyed having as my doctor. You can see from the ratings at Health Grades that Dr. Wiggins is well liked by patients, but the appointment system and her office staff.. not so much. I hope GGObgyn does an overhaul on all their office administration and the website that interacts with patients before they venture further with information technology as a tool for communications.
March 19, 2009
The Life of a Tweet
Twitter (and the iSchool -- or one of my poor brethren -- I have a master's from UC Berkeley's iSchool) seem to be abuzz in the tweetsphere over one ill-considered tweet tossed off by a student and found by her summer internship employer, likely via search.twitter.com. For background, you can see this: FattyCisco.com. The poor girl is likely humiliated and horrified over what she thought was an innocent and, likely, fleeting thought that didn't really reflect how she felt overall.
We've all had those momentary thoughts where, when we are ambivalent, we toss something out of our mouths, and once it's out there, we think: wow, that doesn't even ring true; or, it did for a nanosecond and now it's changed; or, gee, that's about 5% of the way I actually feel about this. But spoken aloud and truly ephemeral (unless recorded in some form) is different than written down and searchable in the grand database of the Googlezon and search at Twitter. Or maybe it's just a joke.
This is one of the problems with online communities and specifically twitter:
You don't know who's listening, and because of search tools, you are findable beyond your follower list or your "community" of known tweeters (ppl you @ with or read), unless your account is private.
I don't think we have at all sussed out what it means to tweet in the long term, or what the power of the tweet is, or where the tweet goes and what sort of life it has beyond the first few minutes or hours of its life in the Twitter / client context.
This is another example of something that happened recently:
A PR exec going to Memphis to meet with a client, FedEx, insulted the client on the way to the meeting. The client wrote a letter to the PR company, to him and his bosses, and cc'd everyone at FedEx as well. Ooops.
The problem is, tweets go to those paying attention at the moment, those who may save tweets in clients (I leave my Twitter client open and check it now and then as I have time -- right now I have 15k tweets from the past couple of days), those pivoting on a single user, those searching for key words, and those looking at related conversations.
But when you tweet, in your head you're often just thinking about those you expect to read it, like only a few of your followers paying attention at the time. What happens with most tweets (some reading by some followers) is not what can happen with any tweet.
The interface and interaction at Twitter's website don't lead you to believe that what happens in incendiary examples could happen to you. And different Twitter clients (an Android or iPhone app, for example) don't lead you to understand, through use, the permanent nature of tweets the way, say, search.twitter.com might, as you see something you deleted appear there anyway.
It takes experience with all these different modalities to inform you because there is no advance disclosure or warning of the elasticity of a single tweet.
What is most interesting is this pushes me to think harder about what the interface of "aged information" online looks like (and I don't mean google search results that move from page 1 to page 3 over time).
And I have to ask myself what it would mean to have what Judith Donath discussed on the panel, Is Privacy Dead or Just Very Confused, moderated Saturday at SXSW by danah boyd. Judith discussed having some kind of a "mirror" for you of your digital self that would reflect all your online presentation and communications and expression... just so you might get a sense of what you show people and what you project at a moment in time. Right now it's really hard to gather that sense of yourself. Right now, you don't really see it in any sort of complete way. But others see pieces of you digitally represented at different times. It would be like re-disclosing for yourself what you've done, discovering how others view you, in slices or on the whole, in order to see the effect you have. It would probably be helpful to know what had reach and where, and what was for now at least, forgotten.
But frankly, the privacy implications of that are huge as well. So, I'm thinking. No answers on that one yet.
January 28, 2009
Happy Data Privacy Day!
Apparently, last night the US House of Representatives passed HR 31 declaring January 28, 2009 National Data Privacy Day. 402 votes in favor, none opposed. Jolynn Dellinger of Intel Corporation, working with Congressman David Price and Congressman Stearns, spearheaded the effort.
More info for today's events at The Privacy Association.