On February 16, the TUM Think Tank hosted a fireside chat featuring Sir Nick Clegg, President of Global Affairs at Meta. In conversation with Urs Gasser, rector of the Munich School of Politics and Public Policy (HfP), Sir Nick Clegg shared his perspectives and insights on a diverse set of topics at the intersection of artificial intelligence (AI) and innovation, particularly from a European standpoint.

The wide-ranging discussion touched upon the transformative potential of AI technologies, dissecting their implications across various sectors both within Europe and globally. Sir Nick Clegg, a leading figure in the technological landscape, shed light on Europe's unique contributions and regulatory considerations concerning AI and the metaverse.

Here are a few of the key points made during the fireside chat:

1. The countries that will most benefit from AI technology are those that can deploy it quickly and effectively, not necessarily the ones that develop it. Geopolitical discussions about AI are moving from attempts to control access to recognizing the inevitability of its widespread adoption. This shift is exemplified by companies like Meta open-sourcing their large language models (LLMs), indicating a trend towards sharing technology to maximize its use, rather than gatekeeping it.

2. Realizing the full potential of AI technology requires international cooperation, ideally among techno-democracies like the EU, USA, and India. Despite political challenges and varying approaches to technology policy, collaboration on research and policy could significantly advance AI's positive impact, particularly in fields like health and climate change.

3. While AI's dual-use nature means it can be used for both beneficial and harmful purposes, particularly in generating realistic misinformation, ongoing efforts by tech companies to identify and label AI-generated content are crucial. Cooperation among major players to establish standards and responsibilities for AI-generated content can empower users to discern and mitigate misinformation.

4. The narrative that technology, including AI, is inherently detrimental to democracy is challenged by historical context and empirical research. Concerns about technology's impact are often exaggerated, and while it's essential to develop ethical guardrails alongside technological advancements, the relationship between technology and societal change is complex and not inherently negative.

5. Discussions about AI often sensationalize its dangers, treating scenarios like the Terminator as relevant and fearing that AI will replace humans. This tendency stems from anthropomorphizing AI, attributing human-like qualities to it and producing misplaced concerns. Instead, AI should be viewed as a tool that excels at certain tasks, much like a fast car. Moreover, there is a pattern of new technologies being exaggerated by both proponents and opponents, as seen historically with radio. Currently, AI's capabilities are overestimated, sparking moral panic and defensive regulation and detracting from the core question: how to use it effectively.

6. Companies like Meta, which are used by 4 billion people per day, bear significant responsibility, which they must acknowledge. We need guardrails that are not developed solely by tech companies but emerge from collaboration between government and society. It is not ideal that guardrails are developed 20 years after the technology itself, as we have seen with social media. Ideally, regulation should happen concurrently with technological development.

Here is what our attendees had to say about the event:

Sofie Schönborn, PhD Student at HfP:

“I am delighted to witness the diverse array of individuals who have found their way here today. Here, students converge with industry leaders from the technology sector, alongside scholars from TUM and forward-thinking figures from the public sphere. The TUM Think Tank emerges as a vibrant hub, a melting pot of ideas, and a diverse cohort of individuals committed to technology, society, and democracy. This is the place to have the conscious discussions and joint deliberation about societal and political implications of technologies, about responsibility and potential futures ahead of us... and to collaborate to enable human-centric technology co-creation and co-design!”

PhD Student at HfP:

“As a researcher, engaging with leading practitioners in the field has been hugely rewarding. It gives me direct access to valuable first-hand information and has helped me complement the empirical material for my research during follow-ups with them. Personally, their careers inspire me, and I am already looking forward to our next guests at the TUM Think Tank.”

Franziska Golibrzuch, Master Student at HfP:

“It was extremely insightful to listen to such an expert – Sir Nick Clegg gave us the industry perspective while having an extensive background in government. Especially in the case of AI and the current debate about AI regulation, security and more, this has been a great opportunity for us TUM students. All in all, it was a super interesting event that connects to my studies in many ways, because it puts the intersection of technology and politics at the center of the discussion and repeatedly sheds light on the critical and important overlaps between innovation, society and public policy. Even after the fireside chat I had the chance to ask questions, which I appreciated a lot.”

Thank you to the Meta team for making this fireside chat possible, and to everyone who took part and asked thought-provoking questions.

Sir Nick Clegg is President, Global Affairs at Meta. He joined the company, then called Facebook, in 2018 after almost two decades in British and European public life. Prior to being elected to the UK Parliament in 2005, he worked in the European Commission and served for five years as a member of the European Parliament. He became leader of the Liberal Democrat party in 2007 and served as Deputy Prime Minister in the UK's first coalition government since the Second World War, from 2010 to 2015. He has written two best-selling books, Politics: Between the Extremes and How to Stop Brexit (And Make Britain Great Again).

Artificial intelligence and quantum technologies: disruptive technologies that can change the world. On February 6, Bavarian State Minister Markus Blume visited the TUM Think Tank. On the occasion of the launch of the QuantWorld project, funded by the Federal Ministry of Education and Research (BMBF) with 1.9 million euros, the TUM Think Tank presented two of its projects to the Bavarian State Minister of Science and the Arts: the Quantum Social Lab and the Generative AI Taskforce. The visit offered an in-depth look at the projects of the TUM Think Tank, which not only drives innovation but is itself an innovative institution.

The Generative AI Taskforce promotes responsible innovation

"ChatGPT was the 'iPhone moment' of generative AI," explained Noha Lea Halim, who presented the TUM Think Tank's Generative AI Taskforce. Generative Artificial Intelligence (AI) - most notably ChatGPT - has fundamentally changed our technological landscape. The rapid market entry of these new technologies has created a tension between innovation and regulation. To navigate these questions, the TUM Think Tank launched the Generative AI Taskforce in April last year. "The task force here at the TUM Think Tank ensures a transfer of knowledge from universities to industry and the state and strengthens Bavaria's pioneering role in the global AI landscape," says Halim.

Quantum technologies: The social transformation of tomorrow - already in view today

"In keeping with the theme of the iPhone moment of artificial intelligence, we are still waiting for the so-called QDay in the field of quantum technology," explained Urs Gasser, Rector of the Munich School of Public Policy (HfP). "Even though traditional computing systems are still widely used today, quantum technologies are already here and have the potential to change the future forever."

This is why Urs Gasser and Fabienne Marco founded the Quantum Social Lab in September 2022 with the support of TUM President Thomas Hofmann. The Quantum Social Lab addresses the ethical, legal, social and technical challenges and opportunities that the further development of this technology will bring. As part of this work, the Lab's QuantWorld project, funded by the Federal Ministry of Education and Research (BMBF) to the tune of 1.9 million euros, will bring these new technologies closer to citizens with the help of artists and a participatory learning platform. In view of the expected disruptive effects of second-generation quantum technologies, the project investigates specific future scenarios in the fields of medicine, banking and mobility. "We don't know what the future will look like with second-generation quantum technologies, but we shouldn't miss the opportunity to shape it," concludes Fabienne Marco, head of the lab.

"A think tank in the best sense of the word and a real flagship for AI research and AI application in Bavaria: the TUM Think Tank's projects fit in perfectly with Bavaria's AI measures. The AI age and the coming quantum revolution bring with them ethical, regulatory and social challenges that we want to address at an early stage. The Quantum Social Lab and the Generative AI Taskforce are preparing citizens and decision-makers in Bavaria for the opportunities and social impact of these disruptive technologies. We are delighted that these important programs are being implemented in Bavaria and are therefore happy to continue supporting the research and projects at the TUM Think Tank!" say the Bavarian State Minister for Science and the Arts, Markus Blume. 

The Bavarian state government is actively committed to investing in key technologies, education, research, infrastructure, transfer and science as part of its high-tech agenda. The Minister's visit underlines the importance of innovation and of collaboration between research institutions and public administration in pushing the frontiers of societal and technological development. The Generative AI Taskforce and the Quantum Social Lab are just two examples of how social and technological transformation come together at the TUM Think Tank and the HfP. We would like to thank the Minister of State for Science and the Arts, Markus Blume, for his visit and his great interest, and we look forward to further collaboration.

The research project "Using AI to Increase Resilience against Toxicity in Online Entertainment (ToxicAInment)" by Prof. Dr. Yannis Theocharis (Chair of Digital Governance), funded by the Bavarian Research Institute for Digital Transformation (bidt), explores the spread of extremist, conspiratorial and misleading content on social media, investigating how such content is embedded in entertaining formats. It aims to deepen our understanding of this content's impact on user behavior by combining theories of entertainment, visual communication and toxic language with AI methods. The project makes an important contribution to analyzing and combating online toxicity. More information can be found on the project page or in the bidt press release.

After an intense three-day negotiation marathon, negotiators from the Council presidency and the European Parliament reached a provisional agreement on the proposal for harmonized rules on artificial intelligence (AI), known as the Artificial Intelligence Act. The draft regulation aims to guarantee that AI systems placed on the European market and used within the EU are safe and respect fundamental rights and EU values. This groundbreaking proposal also seeks to foster increased investment and innovation in AI within Europe. Following the provisional agreement, work will continue at a technical level in the coming weeks to finalize the details of the new regulation. Once this work is completed, the presidency will submit the compromise text to the representatives of the member states for endorsement. The full text will require confirmation from both institutions and undergo legal-linguistic revision before formal adoption by the co-legislators.

We asked members of our community for their insights on the AI Act. This is what they think:

Samson Esaias: Associate Professor of Law at BI Norwegian Business School, Faculty Associate at the Berkman Klein Center for Internet & Society at Harvard University

"Since the Commission's April 2021 proposal, consensus has emerged within the bloc's legislative bodies on a risk-based approach to AI regulation, along with innovation-supporting measures like sandboxes. Debates have focused on sensitive issues such as biometric identification for law enforcement and regulation of General Purpose AI (GPAI). The Parliament advocated for stronger fundamental rights safeguards, while the Council favoured broader exemptions for the use of biometric identification for law enforcement. The Parliament's GPAI regulatory proposal also encountered resistance from the Council, partly because it focuses on the technology itself instead of the associated risks. Nonetheless, from the press-releases, the Parliament seems to have secured significant wins, including bans on biometric identification using sensitive data, internet photo scraping for facial recognition, restrictions on predictive policing, and mandatory rights impact assessments for high-risk systems. Similarly, despite strong resistance from some member states, the latest draft also includes important obligations on GPAI and systemic foundational models, though criteria for the latter may be overly stringent. This focus on GPAI regulation, initially absent from the Commission's draft, highlights the shift in priorities over the past two years. The question that remains is whether these additions will stay relevant in two years when the legislation comes into effect or if they will highlight the need for a rethink on how to regulate such rapidly evolving technologies."

Urs Gasser: Professor of Public Policy, Governance and Innovative Technology, Rector of the Hochschule für Politik (HfP), Dean of the TUM School of Social Sciences and Technology, and Lead of the TUM Generative AI Taskforce

"The agreement on the AI Act marks an important milestone. Above all, it is a powerful political signal to the global community, showing that the EU lawmaking bodies are functional and can come up with meaningful guardrails in a complex and fast-moving normative field, with human rights and democratic values as lodestars. Whether the AI Act as a complex legal and regulatory intervention – at times resembling a Rorschach test – will live up to the high hopes expressed by its leading proponents remains to be seen. AI governance as a normative field comes with many unknowns. Perhaps the biggest challenge ahead is to learn continuously and manage both legal path dependencies and unintended consequences that often come with such ambitious legislative projects, as the recent history of law, technology, and society teaches us."

Noha Lea Halim: Doctoral Student and Research Assistant at the Professorship for Governance, Public Policy & Innovative Technologies at the TUM School of Governance, Assistant in the TUM Generative AI Taskforce

"The EU via its AI Act agreement brings forward a global benchmark regulatory proposal, representing most ambitious framework to date. By recognizing the need to not only regulate the economic but also the societal impacts of AI, it sets the tone for how AI might unravel in the future and implies far-reaching effects for the research and development of AI systems, in Europe and beyond.
The 12-24 months implementation phase will give a glimpse towards the long-term capability of the regulation to adapt to the technology’s disruptive potential as well as the Union’s ability for capacity building to bring the proposal to life.

The landmark proposal only marks the beginning of addressing AI’s future challenges, moving forward there will be many more to come."

Timo Minssen: Law Professor, CeBIL Director, University of Copenhagen and Global Visiting Professor at TUM in Spring 2024

"While much controversy remains, reaching an agreement on the EU AI Act was crucial since the time to decide where to regulate (or not regulate) is now. AI is evolving so rapidly, and it's already used by millions of citizens in many areas and stages of life on a daily basis. Both the risks and the opportunities are real, and we must address them swiftly if we want to keep control and reap the benefits from this technology in sustainable, safe and fair ways.

Given the complexity of the topic and the need for trade-offs, the delays accompanying the negotiations of the AIA were not surprising. Many of the AIA’s more restrictive provisions appear to have been slowly watered down. Mostly due to industry interests and competitive concerns, regulatory thresholds have been gradually lowered and more traditional value-based boundaries have been stretched. For better or worse, we can see similar developments in the drafting of guidelines such as the Chinese interim regulations on generative AI, as well as in Western countries such as the US and UK.

It is clear that these changing policy perspectives illustrate how AI challenges our traditional values and concepts, which signifies the enormous stakes of the task at hand. The impact not only on businesses, welfare, industry, innovation, and the knowledge commons, but also on individuals’ access to - and protection from - powerful technology is massive. Calls for banning or further regulating specific applications in the EU on ethical and value-based legal grounds and precautionary approaches had - and will have - to be balanced against both competitive disadvantages and health risks due to missed opportunities from setting overly high regulatory thresholds. It can therefore be assumed that the significance of so-called regulatory sandboxes will grow, although this is a concept in need of further clarification.

It also seems to me that, while rigorously protecting human rights and fundamental values, the AIA has with its risk-category-based approach generally taken a regulatory “as open as possible, and as closed (i.e. regulated) as necessary” position. In particular with regard to lower-risk AI systems, this could be good news for many innovators and SME developers, though some might have preferred even fewer rules. On the other hand, those concerned about the risks, or established companies with powerful IP portfolios and regulatory departments, might have preferred an “as closed as possible, and as open as necessary” approach with high regulatory thresholds, be it to prevent risks - or (!) newcomers and competition.

In that regard it is also important to bear in mind that, essentially, the current debates mostly concern high-level rules. What matters in daily life is how these are implemented where the action happens, as well as whether the rules we set are enforceable and feasible and whether compliance can be monitored. If the choice is between a jungle of extremely detailed rules with insufficient means for enforcement and fewer but very robust and enforceable rules, I definitely prefer the second option, to increase respect for the rules."

Armando Guio Espanol: Affiliate at the Berkman Klein Center for Internet & Society at Harvard University

"The EU AI Act is a regulation that not only will have an impact in Europe, but around the world. The adoption of this regulation will lead many countries to decide on their own rules regarding the development and implementation of this technology. It will be essential to follow the work that will come in the next months and the implementation process that will emerge in the European Union. In 2024 we will observe several countries join Europe in the official adoption of new regulations for AI systems. For example, several Latin-American countries are discussing now regulatory proposals for AI. The EU AI Act will have a considerable effect in mobilizing and accelerating many of these discussions. It will be also interesting to see how this policy fragmentation will impact the way this technology is being used and deployed, and the response of some of the biggest companies to this process."

Dirk Heckmann: Law Professor at the TUM School of Social Sciences and Technology, Director, bidt.digital, Member of the Generative AI Taskforce

"Whether the AI Act will prove to be effective regulation for AI will only be shown in legal practice. Political satisfaction does not replace legal certainty."

When: January 18, 2024
Where: Downtown Munich, Bayerischer Rundfunk (BR)
Hosted by: Bayerischer Rundfunk & TUM
Intended Audience: Researchers (professors, postdocs, PhD candidates) with an interest in AI and/or journalism and AI.

Join us in exploring the intersection of technology and journalism. This workshop focuses on getting to know the projects and methods used in this exciting interdisciplinary field, and on discovering possible collaborations on use cases and methods. We look forward to connecting researchers in foundations, applications, and data! We explicitly encourage junior researchers to apply.

We want to hear from you if your work falls within one of the following topics:

Deadline for registration is December 5, 2023: https://collab.dvb.bayern/x/V5IuDQ

Participants will be selected within two weeks after the submission deadline.

If you have any questions, please contact Emmelie Korell at emmelie.korell@tum.de

We look forward to seeing you in January!

"AI applications are only as good at medicine as the data sets they are trained on. So you need really good data sets, including from our patients in Germany. These should not be biased and should meet high quality standards. Then you get the best possible medical AI. And if it has been carefully proven that such an AI application can perform individual tasks better than doctors or existing software, then we at the German Ethics Council say that it should be made widely available. There are the very first examples, for example in diagnostics in imaging" says Alena Buyx, Chair of Germany's Ethics Council and member of the Generative AI Taskforce at the TUM Think Tank.

In an interview with Tagesspiegel Background on medicine and AI, she touches on topics such as AI algorithms in psychotherapy and the Ethics Council's stance on electronic patient records. "Artificial intelligence is a tool in medicine," says Alena Buyx. On the question of the ethical evaluation of AI, she suggests that a central review board could be helpful.

The full interview is available by free subscription at: Tagesspiegel Background.

“I am very pleased that, together with Microsoft, we have succeeded in developing an AI chat system tailored to the requirements of the Fraunhofer-Gesellschaft — and we did so in a very short time,” says Prof. Ingo Weber, director of Digital Transformation and ICT Infrastructure at the Fraunhofer-Gesellschaft. “We have found that many colleagues wish to use chat-based AI applications for their work and for research. However, public solutions available so far are problematic for work-related purposes, especially when it comes to data protection, confidentiality and information security.”

Read the full report here.

Weber was also recently invited to the prestigious Dagstuhl seminar on the topic of software architecture and machine learning. Read more on that here.

Prof. Dr. Ingo Weber

“We are currently preparing students for jobs that don’t yet exist, using technologies that haven’t been invented, in order to solve problems we don’t even know are problems yet.” Richard Riley (U.S. Secretary of Education under President Clinton)

Generative Artificial Intelligence (AI), with its ability to generate synthetic data, is considered a revolution in machine learning. The example of ChatGPT shows that such technology can not only automate text creation but also enhance human creativity. Despite the current limitations and challenges associated with its use, profound public curiosity led to a record-breaking one million users within only five days of its launch. Since ChatGPT's public release in November 2022, we have been observing a consistent shortening of Generative AI's technological cycles.

These technologies hold enormous potential to foster and promote adaptive, cooperative, and immersive learning environments tailored to individual learners. Characterized by their ubiquity, adaptability to the learner, and cost-effectiveness, they can serve as tools for user empowerment at scale. Such advancements promise to bring us a big step closer to realizing the UNESCO Education 2030 Agenda, which advocates for a human-centered approach to AI that fosters inclusivity and equity in education. In line with the mission statement "AI for all", the aim is to ensure that this technological revolution benefits everyone, especially in innovation and knowledge dissemination, and is used in a responsible way.

Fostering creativity and critical thinking

Hence, to adequately equip learners for their future professional and personal goals, it is crucial to provide them with competencies in addition to basic knowledge. These competencies should enable learners to compete in an environment where numerous tasks are automated, complex cognitive processes are required, personal responsibility and interpersonal skills are increasingly important, and interdisciplinary collaboration is the basis for solving complicated societal problems. The mandate for education is therefore to evolve from tasks rooted in routine and impersonality to tasks that are personalized, multifaceted, and creative. We need to develop strategies that promote multiple competencies beyond traditional curricula, with an emphasis on fostering creativity, critical thinking, collaboration, and communication.

We are currently facing an exciting and disruptive change in education. The most important question remains: How can we democratize access to innovation and knowledge, create a more equitable and inclusive academic landscape, and meet the demands of a world in transition? What pressing challenges do we need to address towards this goal? This is where I am also interested in a perspective from the neurosciences - from my colleague Aldo Faisal: How can interdisciplinary approaches, and in particular insights from neuroscience, help to distinguish between the outputs and utterances produced by an AI and those produced by humans?

Global discourse series "One Topic, One Loop"

Four people from four different countries and four different universities discuss a current topic in research and teaching. The series begins with an initial question to which the first person responds and asks the next person another question on the same topic. The series ends with the first person answering the last question and reflecting on all previous answers. The topic of the first season is Large Language Models and their impact on research and teaching.

Find the whole series with Aldo Faisal, Professor of AI & Neuroscience at Imperial College London, Jerry John Kponyo, Associate Professor of Telecommunications Engineering at Kwame Nkrumah University of Science and Technology, and Sune Lehmann Jørgensen, Professor at the Department of Applied Mathematics and Computer Science at the Technical University of Denmark, here.

The author

Prof. Dr. Enkelejda Kasneci is co-lead of the Generative AI Taskforce at the TUM Think Tank. She also heads the Chair of Human-Centered Technologies for Learning at the TUM School of Social Sciences and Technology, is director of the TUM Center for Educational Technologies, and is a member of the Munich Data Science Institute at TUM. Before her appointment to a professorship at TUM, Enkelejda Kasneci studied informatics and conducted research into human-machine interaction at the University of Tübingen.

Artificial intelligence (AI) has the potential to fundamentally change the scientific landscape. In a state parliament hearing in the Committee for Science and the Arts, Prof. Dr. Enkelejda Kasneci, co-leader of the Generative AI Taskforce at the TUM Think Tank, was asked together with other experts about the opportunities and risks of AI in higher education. The discussion revolved around preparing students and faculty to use AI, the role of AI tools such as ChatGPT in writing, and the need for open and accessible use of AI tools in libraries. Despite some concerns, the experts emphasized the positive impact of AI and advocated for an optimistic view of the future of academia.

The co-leader of the Generative AI Taskforce emphasized that generative AI is steadily advancing, with ever-shorter technological cycles. This, she said, opens up opportunities for active, collaborative and immersive learning environments that are individually tailored to learners' needs, thus contributing to the UNESCO Education 2030 Agenda, which calls for a human-centered approach to AI in education to advance inclusion and equity. Under the motto "AI for All," everyone should benefit from the technological revolution and reap its rewards, especially in the form of innovation and knowledge.

Basic competency goals in academic writing will still be maintained and will not be replaced in the long term. However, the introduction of AI writing tools requires adaptation, and their integration should be done responsibly. Legal issues were also highlighted during the hearing, such as copyright, privacy, and liability. Universities should carefully consider these issues and take appropriate measures to protect the rights of all stakeholders, Kasneci said.

Although some students and faculty appreciated the efficiency and support of AI writing tools and certainly saw advantages in time savings, generation of ideas, and error detection, there was also some skepticism about automated text generation. Concerns about plagiarism and data protection could also lead to acceptance problems. According to Kasneci, students and faculty have reservations predominantly about the accuracy, reliability, and ethics of AI-generated texts. There could be a sense of loss of control if AI writing tools are perceived as a substitute for traditional writing skills. Therefore, she said, it is important to acknowledge these concerns and provide comprehensive education, training, and guidance to promote student and faculty confidence and acceptance.

In general, the experts agreed that a "calibrated trust" in AI is necessary in the scientific community. This means that students and teachers should be prepared for the use of AI in order to make the most of the opportunities offered by this technology. It was emphasized that AI tools such as ChatGPT can automate writing and increase creativity, allowing students and faculty to focus on more challenging tasks.

Kasneci appealed, "Education needs to move from routine and impersonal tasks to more personal, complex and creative tasks. We need to find ways to enable the promotion of multifaceted competencies beyond curricula and syllabi in higher education, with a strong focus on creativity, critical thinking, collaboration and communication."

She adds, "Overall, we are facing an exciting time of change in education. The question will be how we make innovation and knowledge accessible to all and enable a more equitable and inclusive education landscape that meets the demands of a disruptively changing world."

It is not only in higher education that there is an urgent need for action. In the Handelsblatt, Kasneci recently called for a "revamping of the curricula". Teaching is "far too fragmented"; with the support of AI, it will be easier to teach "holistically" in the future. To achieve this, however, the education ministers must ensure that all teachers acquire a basic knowledge of AI.

Enkelejda Kasneci is a co-founder of the newly established TUM Center for Educational Technologies, where interdisciplinary research teams investigate the effectiveness of digital tools for learning and teaching and develop new applications. The center will bring these into practice through advanced training and by supporting start-ups.

Each week, we will introduce you to one of the members of the Generative AI Taskforce at the TUM Think Tank. Get to know their take on the latest developments within the field of Generative AI, representing various perspectives and fields.

In our first episode: Meet Georg Groh from the TUM School of Computation, Information and Technology.

Now out on our YouTube Channel:

Don't lose track of innovation.

Sign up to our newsletter and follow us on social media.