Join us on Thursday 19 September for the Hogan Lovells Privacy and Cybersecurity KnowledgeShare in London. We will share our latest thinking on the key privacy and cybersecurity issues faced by those with data protection responsibilities within organisations. Our all-day event will cover a lot of ground through incisive quick-fire presentations, Q&A panels and hands-on workshops.
On May 1, 2019, the National Institute of Standards and Technology (NIST) announced a Request for Information (RFI) in the Federal Register regarding ongoing efforts to develop technical standards for artificial intelligence (AI) technologies and the identification of priority areas for federal involvement in AI standards-related activities. Responses to the RFI are due by May 31, 2019.
Join us in May at the 2019 Global IAPP Summit, where we will be speaking on hacking, privacy and cybersecurity, and the TCPA. We hope you can join us.
The National Science Foundation is seeking public comment on US policy for artificial intelligence, according to a Notice of Request for Information (RFI) published in the Federal Register on September 26, 2018. Specifically, the RFI requests input from the public on whether the National Artificial Intelligence Research and Development Strategic Plan (AI Strategic Plan) should be updated or improved. Comments on the RFI are due to the National Science Foundation by October 26, 2018.
Nothing challenges the effectiveness of data protection law like technological innovation. You think you have cracked a technology-neutral framework, and then along comes the next evolutionary step in the chain to rock the boat. It happened with the cloud. It happened with social media, with mobile, with online behavioural targeting and with the Internet of Things. And from the combination of all of that, artificial intelligence is emerging as the new testing ground. Twenty-first century artificial intelligence relies on machine learning, and machine learning relies on…? You guessed it: data. Artificial intelligence is essentially about problem solving, and for that we need data, as much data as possible. Against this background, data privacy and cybersecurity legal frameworks around the world are attempting to shape the use of that data in a way that achieves the best of all worlds: progress and protection for individuals. Is that realistically achievable?
As previously reported, on Thursday, March 9th, the Federal Trade Commission (FTC) hosted a forum on the consumer implications of recent developments in artificial intelligence (AI) and blockchain technologies. This is the second of two entries on the March 9th FinTech Forum and focuses on the discussions surrounding blockchain technologies, in which panelists reflected on the nascent stage of the technology, industry representatives expressed confusion over the applicability of current regulation, and regulators expressed a lack of clarity over jurisdictional questions.
On Thursday, March 9th, the Federal Trade Commission (FTC) hosted a forum on the consumer implications of recent developments in artificial intelligence (AI) and blockchain technologies. The FTC acknowledged the benefits of technological developments in AI and blockchain technologies, but stressed that advancements in these technologies must be coupled with an awareness of and active engagement in identifying and minimizing associated risks. This blog post focuses on the AI discussion, which addressed how the values of privacy, autonomy, and fairness are affected by the advent of AI systems as well as how to ensure safety and security in the development and deployment of individual and connected AI systems.
Please join us for our January 2017 Privacy and Cybersecurity Events.