Table of Contents
Executive Summary
Introduction
Analysing, Fact-Checking and Verifying Multimedia Data
Key Concepts
Operational Models
Challenges of User-Generated Content
Hybrid Approach
Tools for Analysing, Fact-Checking and Verifying Multimedia
Currently Available Tools
Using Dashboard Systems & Future Technologies
Humans versus Technology
Humans perform poorly at detecting photographic manipulations
Executive Summary
The sixth Ridgeway workshop in the series on open source research methodologies in support of non-
proliferation was held in Vienna on 8-9 May 2018. It addressed three crucial aspects of working with
multimedia data: analysis, fact-checking and verification. The workshop had three main goals. First, it
aimed to highlight and demonstrate the affordances of tools currently available or in development for
the collection and analysis of multimedia data, in particular data from social media platforms. Second,
the workshop presented a series of use-case scenarios, in which expert speakers offered insights
based on their experiences of successful projects involving multimedia data collection, analysis, fact-
checking and verification. Third, it facilitated extensive discussion and lesson learning about the
challenges of analysing, fact-checking and verifying multimedia data.
To fulfil these objectives, we brought together academics, subject experts and other professionals
from various backgrounds, ranging from art history and journalism, to photography and psychology.
This cohort of invited speakers generated an interdisciplinary environment in which the workshop
afforded all participants the opportunity to explore and learn from practices developed in other fields,
stimulating a richer and more wide-ranging experience than a narrower focus or more homogenous
group of participants would have allowed.
Key Points
A summary of the key discussion points and issues worthy of further consideration is given below.
1. The Open Source operating model and the constantly changing Open Source landscape
• Given the constantly changing and transforming Open Source (OS) landscape, it is crucial for
organisations to understand how to effectively and efficiently operate in this environment.
• In particular, an OS operating model should clarify what activities an organisation needs to
pursue; how these activities are inter-connected; and who should perform these activities.
• Generally, an OS operating model entails five crucial steps, namely: filtering; capturing &
storing; validating; producing; and reporting.
• Verifying User-Generated Content (UGC) poses significant challenges for journalists and those
working with UGC. What speakers referred to as alternative facts and ‘fake’ or ‘f***’ news
(misinformation and disinformation) constitute further challenges.
• To effectively and efficiently deal with these problems, increasing digital literacy is key – not
only among journalists, but also for the audience consuming digital media.
• The Reuters Institute for the Study of Journalism’s Digital News Report 2017 stated that a
quarter (24%) of its survey respondents thought social media did a good job in separating facts
from fiction, as compared with 40% of respondents who thought the same for the more
traditional news media.
2. A hybrid approach, combining analytical expertise and technological innovations, is still
considered to be the best method for working with multimedia information
• Hybrid approaches combine technology with the contribution of human analysts, who can
provide insights to exploit the capabilities of technology and thereby secure the best
performance of both domains. Technology is still not a ‘magic bullet’ and therefore
organisations should continue to employ a hybrid approach.
• In some cases, large volumes of multimedia data must be processed (stored and disseminated
to those that need it), analysed, fact-checked (confirming the contents of the data) and
verified (identifying the provenance of the data), which ultimately poses a significant capacity
and capability challenge to organisations.
• Automation can assist this process, but it also has its limitations, especially when it comes to
fact-checking and verifying multimedia data, which oftentimes requires the contribution of
human experts to deliver successful verification.
• Furthermore, there are problems with, e.g., automated translations and transcriptions, which
are still not effective in capturing crucial contextual and linguistic nuances, and thus require
the active participation of human linguists and country- or subject-matter-specialists.
• Automation is most helpful at the earliest stages, particularly in filtering and processing data
at scale; to analyse, fact-check and verify certain data, human analysts are then required,
underlining the necessity of this hybrid approach.
• Collaboration among analysts and other subject-matter experts is crucial when analysing, fact-
checking and verifying multimedia, especially UGC. There are examples of ‘crowd-sourced’
collaborations to enable fact-checking and verification in a more efficient, effective and faster
manner. When ‘crowd-sourcing’ is not an option, perhaps because of classification or
sensitivity, there are ways of achieving similar ends by other means, e.g. by creating
collaborative workspaces, either physical or virtual/digital. The ‘right’ solution depends
on contingent factors, such as the size of the organisation, the geographical dispersion of its
workforce, budgetary constraints, and the nature of the analytical problem to be solved.
• Clearly, true ‘crowd-sourcing’ is most useful when dealing with non-sensitive equities and
issues, as external collaborations potentially reveal data sources and other relevant, protected
information; but the potential value of ‘crowd sourcing’ approaches underlines the
importance of collaborative, interdisciplinary problem-solving in multimedia analysis.
3. There are already several, free-at-the-point-of-use technological solutions for analysing,
fact-checking and verifying multimedia data
• Organisations contemplating an upgrade in multimedia analytical capability do not need to
‘reinvent the wheel,’ as many useful and free-at-the-point-of-use tools already exist to
analyse, fact-check and verify multimedia data.
• However, there are several obstacles to be overcome in extracting optimal value from this
broad ecology of current tools. Much like the well-known problem of vanishing internet
sources, a major problem is the transience of many of these tools and techniques, which can
vanish overnight at the developer’s or new owner’s whim without being replaced. Similarly,
the same tool may change its functionality from one day or week to the next, depending on
its development and salient second-order factors like inter-operability with multimedia
platforms’ respective application programming interfaces (APIs) – the interface that tools use
to access the multimedia platform. Therefore, it is imperative for organisations to constantly
review the availability and affordances of existing tools and techniques, as well as to identify
and map new tools and emerging trends in the OS tool landscape.
• Developing in-house tools might overcome the problem posed by the transience and
inconsistent functionality of external tools, but, aside from internal capacity constraints in tool
development, any in-house solution would also be as vulnerable as externally-developed tools
to the same changes in social media platforms’ APIs and the wider policy and regulatory
environment shaping the accessibility of multimedia data.
4. Flexibility versus ‘All-in-One’ Dashboard Systems
• In theory, ‘All-in-One’ dashboard systems provide analysts with critical data and analytical
options at-a-glance, thereby saving valuable time and increasing productivity;
• Organisational ethnography can offer useful insights to inform the process of developing and
rolling out new processes to analyst-users, pursuing a user-centric approach to capability
development;
• Flexibility is very important when dealing with multimedia data; the use of overly rigid, ‘all-in-
one’ dashboard systems was found by several participants to have been ineffective in practice,
as they do not permit swift adaptations to the constantly changing OS environment;
incorporation of feedback loops and periodic points of review and prospective revision are
important elements in a more flexible approach to capability development and
implementation;
• The lack of well-established and proven, correctly-functioning technologies in this field is
another explanation for the perception that dashboard systems are currently less useful than
they might become in future for analysing, fact-checking and verifying multimedia data;
• Most notably, dashboard systems were found by several participants to have significantly
restricted OS analysts’ choices regarding which multimedia data sources could be used to
solve an analytical problem.
5. Research suggests that humans are bad at naked-eye detection of photo manipulation
• Research has demonstrated that humans perform very poorly when it comes to naked-eye
detection of altered photographs; those who were more familiar with photography, however,
performed slightly better;
• There are simple, manual techniques to improve detection of photographic manipulations,
but these only help when certain factors, e.g. shadows and reflections, can be found in the
relevant photographs or videos;
• Raising awareness of these techniques could improve human analysts’ abilities in this area,
but more research is needed to better understand why humans fail so often at this task, and
how humans can improve their detection skills;
• Another technique, borrowed from Art History, proposes a verification model which highlights
the dynamic interplay between forensic, visual and contextual
analysis; as with the earlier reference to hybridity, developments in technical visual forensics
should also be incorporated into any organisational effort to improve visual verification
techniques.
Introduction
Ridgeway Information, a King’s College London spin-out company, is running a series of practical
workshops on Open Source (OS) methodologies in support of non-proliferation research. This series
explores how the information landscape is developing, especially in the digital domain; the role of
technology in providing new capabilities for analysis; and how new analytical techniques, including
those developed and implemented in other fields, can help to evaluate and verify OS information. This
series is funded by the Carnegie Corporation of New York and runs from July 2016 to June 2018.
The sixth workshop, titled ‘Analysing, Fact-Checking and Verifying Multimedia Content’ was held on 8
and 9 May 2018 in Vienna, Austria, at the Vienna Center for Disarmament and Non-Proliferation
(VCDNP), with the participation of the Department of Safeguards from the International Atomic
Energy Agency (IAEA).
The growth in the volume and importance of multimedia information is reshaping the OS information
landscape and driving developments in the use of innovative technologies and new OS research
methodologies to collect, process, and analyse this information. In today's digital environment, with
false information circulating and spreading, evaluation and verification have become more
consequential and essential analytical activities than ever before. In this respect, the sixth workshop
specifically addressed the analytical and fact-checking process, as well as wider issues pertaining to
the verification of multimedia, especially social media data.
The workshop was structured into six panel groups and one ‘breakout’ discussion session that was
designed to encourage and foster interdisciplinary engagement and debate. The workshop focused
on three related issues: (1) the challenges and opportunities in analysing, fact-checking and verifying
multimedia information; (2) current tools available or in development for the collection, analysis, fact-
checking and verification of multimedia data, in particular on social media; and (3) use-case
scenarios, in which expert speakers gave insights into their experiences and workflows of
multimedia data collection, analysis, fact-checking and verification. This workshop series followed on
from a previous series of workshops, also funded by the Carnegie Corporation of New York, and hosted
in Vienna between January 2015 and May 2016. These previous workshops looked at the following
topics:
• Workshop One: the exploitation of big data research methodologies and applications in
support of non-proliferation;
• Workshop Two: the exploitation of financial OS information in support of non-proliferation;
• Workshop Three: the challenges and solutions to the effective collection and analysis of OS
information on states’ nuclear-relevant scientific and technical capabilities;
• Workshop Four: the challenges and solutions to the effective collection and analysis of OS
information on both state and non-state nuclear non-proliferation strategies;
• Workshop Five: a range of issues relating to the use of OS research methodologies
to assess the exploitation of multimedia information.
Analysing, Fact-Checking and Verifying Multimedia Data
Key Concepts
Operational Models
The OS landscape has continuously evolved over the years as technology has progressed.
Organisations need to understand how to best operate within this rapidly changing multimedia
environment. Given the enormous amounts of OS data that need to be processed, analysed, fact-
checked and verified, it is necessary for big and small organisations alike to implement an
operating model that ensures a structured approach to observing, understanding and explaining
phenomena of interest, including by using multimedia information.
Implementing an operating model is paramount, especially when dealing with multiple sources and
content that comes in various formats, such as video, audio, text, image, or some combination of these
and/or other formats. Three key things underline the necessity of adopting an operating model. First,
these models enable more effective and efficient workflows, as well as allowing for the reproduction
and audit of each of the steps involved in the analytical process. Second, implementing an operating
model brings clarity to people, processes and products. In other words, it helps to clarify what needs
to be done, how it will be done, and who is in charge of what. And lastly, operating models define
‘what success looks like,’ so that all relevant stakeholders in the programme know the precise
objectives and how performance will be measured.
Figure 1 An open source operating model: observe, understand & explain (© Dr Chris Westcott)
The operating model presented at the start of the workshop entailed five steps, with each step
clarifying ‘the what, who and how’ of OS analysis (Fig.1). The first step, ‘filtering’ – deciding what to
collect and what to ignore – needs to outline three things: (1) what the consequences of filtering are;
(2) who decides what to collect; and (3) how filtering-related decisions are made. The second step in
the operating model is dedicated to ‘capturing and storing’ multimedia information. Once again, it
highlights three critical questions, namely (1) what will be stored and captured (e.g. metadata, image
resolution); (2) how it will be achieved; and (3) who is responsible for each step in this process. Phase
three is concerned with validation, focusing on both source and content validation. In other words,
is the source what/who it claims to be? Furthermore, can we verify the content, specifically the
questions of what, who, when, where, why, and how? Step four concerns the production of the final
report, which again highlights a series of factors: (1) what is produced; (2) how it is produced; and (3)
who is doing what. This in turn leads to the final step, namely delivering the final product, defining
what, how and who will deliver the findings, which is as essential as the preceding steps in ensuring a
successful process. This is additionally complicated when including multimedia rather than text-based
sources. Most importantly, the operating model presented at the workshop should not be perceived
as a linear model, in which each step needs to be taken before moving on to the next one. Instead,
there is much overlap and internal feedback. Similarly, models must be tailored to the unique contexts
and requirements of each organisation. However, thinking explicitly in terms of an OS operating model
should help any organisation to effectively shape its capability development and implementation
process.
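The five steps of the operating model can be sketched, very loosely, in code. The following is a minimal, illustrative Python sketch under stated assumptions: every function name and the `Item` structure are inventions for illustration, not part of the model as presented at the workshop, and real pipelines would of course handle multimedia payloads, not simple strings.

```python
# Illustrative sketch of the five-step OS operating model:
# filtering, capturing & storing, validating, producing, reporting.
# All names here are hypothetical; in practice the steps overlap
# and feed back into one another rather than running linearly.

from dataclasses import dataclass, field

@dataclass
class Item:
    source: str                     # where the item came from
    content: str                    # the payload (simplified to text here)
    metadata: dict = field(default_factory=dict)
    validated: bool = False

def filter_items(items, keywords):
    """Step 1: decide what to collect and what to ignore."""
    return [i for i in items if any(k in i.content for k in keywords)]

def capture_and_store(items, store):
    """Step 2: capture and store what was kept (e.g. with metadata)."""
    for i in items:
        store.append(i)
    return store

def validate(item, trusted_sources):
    """Step 3: source and content validation - is the source what it claims?"""
    item.validated = item.source in trusted_sources
    return item

def produce(store):
    """Step 4: produce findings from validated material only."""
    return [i.content for i in store if i.validated]

def report(findings):
    """Step 5: deliver the final product to stakeholders."""
    return "; ".join(findings)

# Example run through the (in reality non-linear) pipeline:
raw = [Item("agency_a", "reactor imagery posted"), Item("blog_x", "cat video")]
kept = capture_and_store(filter_items(raw, ["reactor"]), [])
checked = [validate(i, {"agency_a"}) for i in kept]
summary = report(produce(checked))
```

The sketch deliberately keeps each step as a separate function so that, as the model requires, ‘the what, who and how’ of each stage can be documented and audited independently.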
Challenges of User-Generated Content
Another important lesson from the workshop concerned the analytical opportunities and challenges
posed by the proliferation of User-Generated Content (UGC). Working with UGC, such as data from
social media platform activities, presents significant challenges for fact-checking and verification. This
is particularly the case when dealing with breaking news or crises: oftentimes rumours and claims are
circulated on social media platforms before any other media platform. In this environment
‘alternative’ facts and the so-called ‘fake’ (or ‘f***’) news (i.e. misinformation and disinformation), as
described by one of the presenters, constitute particular analytical challenges, given the potential for
false or slanted statements to be disseminated (and amplified, e.g. by botnets) on social media
channels. These practices highlight the importance of sustained analytical attention to the social
media landscape during relevant events. It is also necessary to bear in mind the breadth and variety
of social media usage across the globe, with a rich ecology of local, regional and subject-specific
networking sites to complement the major international platforms such as Facebook, Instagram and
Twitter.
In the case of journalism, for example, the role and skills required of investigative journalists have
significantly changed over the last 15 years. In today’s information landscape, journalists must be ‘part
detective and part librarian’. That is because of several related factors that are reshaping digital
analysis. First, some governments do not want to speak about certain events or seek to deny access
to reporters. Journalists have, therefore, become more reliant on collecting and using UGC as a basis
for their stories. Second, ‘alternative facts’ have emerged as a media issue, one that participants
distinguished from the related phenomena of mis- and disinformation. ‘Alternative’ facts are created
by using the same information as traditional, ‘un-slanted’ reporting, but this information base is then
qualified and presented from a different perspective, in order to offer an ‘alternative’ narrative and
support (sometimes sharply) different conclusions from more mainstream outlets. By contrast, ‘fake’ or
‘f***’ news is the creation and dissemination of outright fabricated stories that are intended to appear
credible and to be ‘shared virally’ on social media platforms. Increasing digital literacy is therefore vital
– not only among journalists but also the wider audience that increasingly consumes news via social
media, mobile telephone or tablet ‘apps,’ and other forms of digital communication. As the recent
Reuters Institute for the Study of Journalism's Digital News Report 2017 has demonstrated, “only a
quarter (24%) of our respondents think social media do a good job in separating fact from fiction,
compared to 40% for the news media.” (p. 9) [see Fig. 2 and Fig. 3].
Figure 2 Social media does a good job in separating fact from fiction (© Dr Chris Westcott)
Figure 3 News media does a good job in separating fact from fiction (© Dr Chris Westcott)
Hybrid Approach
As in the previous workshop, a hybrid approach was still perceived to be the best method for working
with multimedia information. A hybrid approach to multimedia information combines automation in
specific areas with the contribution of human analysts to exploit the best performance in both
domains. In this approach, automated, computer-driven approaches are supported by human
analysis, contextual and subject-matter expertise.
Multiple case studies were presented during the workshop to highlight the utility and continuing
importance of such a hybrid approach. Some case studies even demonstrated that humans performed
considerably better in the geolocation of images and videos, due to the lack of available and
appropriate technological solutions to these problems. Speakers noted, however, that technology will
continue to play a crucial role, but it should still not be perceived as a ‘magic bullet’, especially for the
effective conduct of fact-checking and verification.
The ‘breakout’ session allowed a more in-depth discussion about these issues, and workshop
participants advocated a series of automated approaches and technologies, in particular for the
collection and processing of large volumes of multimedia information, as this represents a part of the
analytical process that requires significant technological input. However, automation was also
perceived to have its limitations, particularly in the down-stream activities of fact-checking and
verification, which were widely considered to require an integral human dimension. This, in turn,
highlights the abiding necessity and utility of the hybrid approach.
In the case of automated translations and transcriptions, for example, participants argued that,
despite significant improvements in this area, current translation technologies still do not capture vital
linguistic and contextual nuances, which ultimately underlines the continued requirement for linguists
and subject-matter experts to be involved in this important process. A further theme which emerged
during several presentations and the ‘breakout’ session was that of ‘crowd-sourced’ collaborations. In
particular, smaller organisations which lacked core staff numbers and attendant expertise, or which
simply required outsiders’ help with the geolocation of target images or videos, took the step of
engaging with external individuals via ‘crowd-sourced’ digital research collaborations. These
collaborations can take many forms, such as one-off, shorter- or longer-term collaborations. One
case study presented in a panel group outlined an ‘outsourced’ approach to the task of verification.
To this end, social media followers and the broader digital public were asked to help verify the location
of a short video clip. Other case studies further demonstrated the utility of crowd-sourcing in the
process of verifying photographic images and videos. Participants observed that video and
photographic verification can be a tedious and time-consuming process, which makes it even more
attractive to consider asking a wider public for assistance.
Collaboration among analysts and other entities can be immensely helpful for analysing, fact-checking
and verifying multimedia. Not all organisations will be able to avail themselves of ‘crowd-sourcing’
solutions to verification. For example, the classification or sensitivity of equities or information
requirements may close off this option to organisations. Although ‘crowd-sourcing’ is not therefore
for everyone, there are a number of alternative options to replicate the advantages of a more
distributed, collaborative approach to verification. For example, organisations can establish
collaborative workspaces – either physical or virtual/digital – in order to foster linkages and synergies
between analysts with different areas of expertise and complementary skills. The main advantage of
collaborative workspaces is to share knowledge and expertise, but amenable projects only emerge if
all parties involved are able, willing and (informally or formally) supported in sharing their expertise
and knowledge within the organisation. When dealing with sensitive information, for instance, the
parties involved are likely to be more constrained in sharing knowledge beyond (even
intra-institutional) boundaries, and thus less able to engage in productive analytical collaborations.
Tools for Analysing, Fact-Checking and Verifying Multimedia
Currently Available Tools
One of the goals of the workshop was to highlight and introduce to delegates the wide range of
affordances provided by tools that are currently available, especially those that are free-at-the-point-
of-use, as well as those that are in later phases of development, for the purposes of analysis, fact-
checking and the verification of multimedia data. In this regard, guest speakers were invited to
demonstrate various tools and techniques that are used in a workplace context on a daily basis and,
moreover, that address current challenges and opportunities in using technical solutions to improve
an organisation’s ability to analyse, fact-check and verify multimedia data. One theme that emerged
during these presentations and the ‘breakout’ discussion session was the existence of free technical
solutions to analyse, fact-check and verify multimedia information. In fact, the main argument put
forward was that, for organisations considering a capability upgrade, ‘there is no need to reinvent the
wheel,’ as effective and free-at-the-point-of-use tools already exist (see Fig. 4).
Figure 4 A curated list of free tools, compiled by Mr Eoghan Sweeney (First Draft News)
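Many of the free image verification tools of this kind rely on perceptual hashing to detect near-duplicate images, for instance in reverse image search. The following is a minimal, illustrative average-hash (‘aHash’) sketch in Python, not the implementation of any particular tool; it assumes the image has already been decoded and resized to a small grayscale grid, a preprocessing step that real tools perform first.

```python
# Illustrative average-hash sketch: a perceptual hashing technique that
# many reverse image search and verification services build on. Unlike a
# cryptographic hash, small edits to an image leave the hash nearly
# unchanged, so near-duplicates can be found by comparing hashes.

def average_hash(pixels):
    """Return a bit string: 1 where a pixel is above the mean intensity."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

def hamming_distance(h1, h2):
    """Count differing bits; a small distance suggests a near-duplicate."""
    return sum(a != b for a, b in zip(h1, h2))

# Two tiny 2x2 'images': the second is a slightly brightened copy of the
# first, while the third has an inverted intensity pattern.
original   = [[10, 200], [30, 220]]
brightened = [[20, 210], [40, 230]]
unrelated  = [[200, 10], [220, 30]]

d_same = hamming_distance(average_hash(original), average_hash(brightened))
d_diff = hamming_distance(average_hash(original), average_hash(unrelated))
```

The brightened copy produces an identical hash (distance 0), whereas the unrelated pattern differs in every bit, which is why this family of techniques is useful for checking whether an image circulating as ‘new’ has in fact appeared online before.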
Nevertheless, workshop participants and guest speakers alike highlighted several major problems with
the current landscape of free-at-the-point-of-use tools and services. Some of these tools, as
tremendous and useful as they were, had vanished from the Internet suddenly (in some cases, literally
overnight) and without prior warning to service-users, and were not subsequently replaced by their
developers or owners. This includes not only tools and services, but also specific features on websites
and social media platforms. The transience of these affordances significantly undermines
the way in which multimedia information can be obtained, analysed, fact-checked
or verified with the help of these online services. In a similar way, as older tools vanished from the
Internet, new tools and services emerged. It is, therefore, imperative for organisations to ensure a
rolling baseline of knowledge of new developments in this landscape, both in terms of developments
within existing tools and the creation of new tools and techniques.
The development of ‘in-house’ OS research tools within organisations was mentioned in the ‘breakout’
discussion session as a potential solution to the transience problems noted above. In-house building
of custom tools, especially for automation purposes, could protect an organisation from losing
valuable tools and services at the whim of an external developer or the new owner of a service.
However, such developmental efforts also require extensive organisational investment, e.g. in
software development and user-testing, before any such tool can be adopted within the organisation
at the desk-level. Moreover, in-house tools would be subject to the same vagaries of the social media operating
model as externally-developed tools, such that they may ultimately be no less reliable through time.
Integrating in-house tools and services into existing, especially externally-generated dashboards could
also present another technical challenge, which consequently could affect the overall performance of
the dashboard itself.
Another important issue that emerged during the ‘breakout’ discussion session concerned the
accessibility of Application Programming Interfaces (APIs) for a wide variety of sites of interest,
including social media platforms. Web services and social media platforms, for instance, provide APIs
to allow web developers to integrate their services or platforms into newly built software applications.
These APIs can also be utilised for automated data collection and analysis purposes. If the accessibility
of APIs, however, is significantly limited, whether due to policy changes by the company or service provider
or simply through routine site maintenance and upgrades, then any third-party tool or dashboard
system that uses the API could experience a degradation or complete loss of service,
and such an impact would be highly likely to occur without prior notice from the social media
platform’s engineers. Therefore, analysts and those procuring especially commercial tools need to be
aware of the vulnerability of third-party tools to changes in APIs.
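Defensive design cannot prevent an API from being withdrawn, but it can make the resulting degradation graceful and visible rather than silent. The sketch below illustrates one such pattern under stated assumptions: the fetcher functions and cache are hypothetical stand-ins, not any real platform’s API, and a production tool would use the platform’s actual HTTP endpoints.

```python
# A sketch of a defensive wrapper for third-party API calls: if the live
# endpoint fails or has been withdrawn, serve locally cached data and
# flag the degradation explicitly. All names here are hypothetical.

def fetch_with_fallback(fetch_live, cache, key):
    """Try the live API first; on any failure, fall back to the cache."""
    try:
        result = fetch_live(key)
        cache[key] = result          # refresh the cache on success
        return result, "live"
    except Exception:
        if key in cache:
            return cache[key], "cached (possibly stale)"
        return None, "unavailable"

# Simulate a platform that has withdrawn its API without prior notice:
def broken_api(key):
    raise ConnectionError("endpoint removed")

cache = {"post/42": {"text": "archived post"}}
data, status = fetch_with_fallback(broken_api, cache, "post/42")
```

Surfacing the `status` value to the analyst matters as much as the fallback itself: an analyst who knows the data is stale can weight it accordingly, whereas a silently degraded tool invites false confidence.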
Using Dashboard Systems & Future Technologies
Several case studies demonstrated that the relevant multimedia data that needed to be analysed,
fact-checked or verified, came from multiple sources and in various formats. Sources can include social
media, foreign-language television and radio broadcast programming, satellite imagery, and many
other formats. Given the enormous amount of multimedia data currently available, dashboard
systems have been developed to automate data collection, analysis and visualisation, helping analysts
make faster, better-informed decisions. To this end, several dashboards were presented during the
workshop, which are all in use in different workplace contexts, such as analysis of global
environmental issues, politics or journalism.
The main advantage of dashboard systems is that analysts can focus solely on analysis rather than
being preoccupied with finding, obtaining and processing relevant data. Furthermore, automated
data collection, analysis and visualisation can save valuable time, which can ultimately be
re-allocated to the analysis or production phase.
Ethnography, defined as the systematic study of people and cultures, can significantly inform
the design and development of dashboard systems. That is because ethnographic studies aim to
understand the environment and culture of an organisation, particularly its workflows,
methodological practices, needs and desires, which can then be reflected in the development of
the dashboard system. Once the dashboard is developed, evaluations follow to establish whether
it has met all objectives and, most importantly, whether analysts find it useful.
In this regard, a debate emerged during the breakout session and after the presentations as to what
extent dashboards are necessary for analysing, fact-checking and verifying multimedia data. One
statement put forward was that flexibility was key when dealing with multimedia data, especially in a
constantly changing and evolving OS environment that requires constant adaptation. In other words,
using overly restrictive dashboards would not allow sufficient flexibility. This was consistent with the
general consensus of the workshop participants, that the use of all-in-one dashboard systems was
sometimes found to be ineffective and therefore inappropriate. This does not mean that dashboards
are useless, but their utility strongly depends on an organisation’s needs, requirements and
expectations.
Furthermore, the lack of well-established, reliably functioning technologies with a proven track record is another reason why dashboard systems are currently seen as less useful for analysing, fact-checking and verifying multimedia data than they might become in future. For example, dashboards often come pre-loaded with a prescriptive list of data sources, significantly restricting analysts' choice of sources. There are nevertheless cases in which dashboards can significantly support an organisation's work, such as analysing and visualising public opinion using multimedia data from social networking platforms such as Twitter. In short, the more flexible a dashboard is in updating its data sources and in the affordances of its data analysis, the more valuable it becomes, but at the potential cost of over-complicating its use and its iteratively revised design. This is a considerable challenge posed by the ever-evolving OS landscape.
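The flexibility trade-off described above can be sketched as a dashboard whose data sources are pluggable rather than pre-loaded. This is a minimal illustrative sketch only; the class and function names are assumptions and do not describe any dashboard presented at the workshop.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Item:
    """A single collected piece of multimedia metadata (illustrative)."""
    source: str
    text: str

class Dashboard:
    """A dashboard core that stays unchanged while sources come and go."""

    def __init__(self) -> None:
        self._sources: Dict[str, Callable[[], List[Item]]] = {}

    def register_source(self, name: str, fetch: Callable[[], List[Item]]) -> None:
        """Add or replace a data source without touching the dashboard core."""
        self._sources[name] = fetch

    def remove_source(self, name: str) -> None:
        """Drop a source that is no longer useful in the evolving OS landscape."""
        self._sources.pop(name, None)

    def collect(self) -> List[Item]:
        """Pull from every registered source in one automated pass."""
        items: List[Item] = []
        for fetch in self._sources.values():
            items.extend(fetch())
        return items

# Usage: sources can be swapped as needs change, instead of being fixed.
dash = Dashboard()
dash.register_source("social", lambda: [Item("social", "post A")])
dash.register_source("broadcast", lambda: [Item("broadcast", "clip B")])
print(len(dash.collect()))  # 2
```

The design choice here is simply that source adapters are data, not code baked into the dashboard, which is one way to avoid the prescriptive source lists criticised during the workshop.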
Humans versus Technology
Humans perform poorly at detecting photographic manipulations
During the workshop, research was presented that specifically studied humans' ability to detect photographic manipulation with the naked eye. Given the considerable rise of 'alternative facts', mis- and disinformation, and the widespread dissemination of false information on social media platforms and other websites, there is a clear need for forensic skills and tools to detect photographic manipulation. Estimating the true size of this problem is difficult, if not impossible, as humans and technical solutions alike often cannot easily determine whether an image or video has been manipulated.
From a technological perspective, reverse image searching can quickly help to verify the authenticity of a photograph or video, but such searches are not always successful. This issue was also apparent in some of the case studies presented and during the breakout session. Analysts therefore need to strengthen their own ability to spot photo manipulations. Training will not solve the entire problem, but certain techniques, especially those that exploit reflections and shadows in the target images and videos, can considerably inform an analyst's overall conclusion.
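One common building block behind reverse image search is perceptual hashing, which lets near-duplicate images be matched even after mild edits. The sketch below implements a minimal average hash over a synthetic 8×8 grid standing in for a downscaled grayscale image; the function names and pixel values are illustrative assumptions, not any search engine's actual method.

```python
def average_hash(pixels):
    """64-bit average hash of an 8x8 grayscale grid
    (a flat list of 64 intensity values, 0-255)."""
    assert len(pixels) == 64
    mean = sum(pixels) / 64
    bits = 0
    for p in pixels:
        # Each pixel contributes one bit: brighter than the mean or not.
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes (lower = more similar)."""
    return bin(a ^ b).count("1")

# A synthetic "image", a slightly brightened copy, and an inverted one.
original = [(i * 4) % 256 for i in range(64)]
near_copy = [min(p + 3, 255) for p in original]
unrelated = [255 - p for p in original]

h0, h1, h2 = map(average_hash, (original, near_copy, unrelated))
print(hamming(h0, h1))  # 0: hashes match, likely the same image
print(hamming(h0, h2))  # 64: maximally different content
```

Because the hash thresholds against the image's own mean brightness, a uniform brightness shift leaves the hash unchanged, which is why the near-copy still matches; real search engines combine several such signals.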
Overall, current research suggests that humans perform very poorly at detecting photographic manipulations; those familiar with the principles and practice of photography fare only slightly better. Further research showed that providing a small amount of training in detecting photographic manipulation has no significant impact on performance. In short, humans do not easily perceive granular information about lighting and reflection, and analysts are unlikely to identify photographic manipulation based on physically impossible shadows and reflections.
Fortunately, there are simple techniques for detecting photographic manipulation using the shadows and reflections in an image or video. By drawing lines from several points on a shadow or reflection through the corresponding points on the object that casts it, an analyst can test whether all the lines converge at a single vanishing or direction point. If they do, the light source or reflection is genuine, meaning the shadows or reflections were not manipulated in this manner. However, even if the shadows or reflections are untouched, the photo could still have been manipulated in other ways, for example by adding or removing objects. The overall conclusion of this panel discussion was that training, and especially awareness campaigns educating analysts in the available techniques for photographic and video interpretation, might improve analysts' ability to detect photographic manipulation.
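The convergence test described above can be sketched geometrically: each shadow-point/object-point pair defines a line, and a single genuine light source requires all such lines to pass near one common point. This is a minimal sketch under assumed names and an arbitrary 2-pixel tolerance, not a forensic-grade implementation.

```python
def line(p, q):
    """Line through points p and q as coefficients (a, b, c) of ax + by = c."""
    a = q[1] - p[1]
    b = p[0] - q[0]
    return a, b, a * p[0] + b * p[1]

def intersect(l1, l2):
    """Intersection point of two lines, or None if they are parallel."""
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-9:
        return None
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

def point_line_distance(pt, l):
    """Perpendicular distance from a point to a line (a, b, c)."""
    a, b, c = l
    return abs(a * pt[0] + b * pt[1] - c) / (a * a + b * b) ** 0.5

def shadows_consistent(pairs, tol=2.0):
    """pairs: list of (shadow_point, object_point) tuples.
    True if all shadow lines converge near one common point."""
    lines = [line(s, o) for s, o in pairs]
    vp = intersect(lines[0], lines[1])
    if vp is None:
        return False  # parallel lines would need a direction test instead
    return all(point_line_distance(vp, l) <= tol for l in lines[2:])

# Shadows cast by a light at (0, 100): each object/shadow line
# must pass back through that light source.
light = (0.0, 100.0)
def shadow_of(obj, t=2.0):
    # extend the ray from the light through the object beyond it
    return (light[0] + t * (obj[0] - light[0]),
            light[1] + t * (obj[1] - light[1]))

objs = [(10.0, 50.0), (30.0, 40.0), (-20.0, 60.0)]
pairs = [(shadow_of(o), o) for o in objs]
print(shadows_consistent(pairs))  # True for a single genuine light source
```

Note the caveat from the panel applies here too: a passing test only means the shadows are mutually consistent, not that the image is untouched in other respects.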
Another panellist presented a potentially useful technique, borrowed from the academic discipline of Art History, to derive a hybrid verification model highlighting the dynamic interplay between forensic analysis and the analysis of visual and contextual information. Photographs often come with descriptions or headlines that can both inform and misinform the analyst, as context influences the reception of the image. This concluded the workshop by invoking a persistent theme: the need for hybridity in the successful analysis, fact-checking and verification of multimedia data, drawing on multiple disciplines and fields of expertise, and effectively combining analysts' expertise with the best affordances of automation.