<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (NISO Z39.96-2019) Journal Publishing DTD v1.2 20190208//EN" "https://jats.nlm.nih.gov/publishing/1.2/JATS-journalpublishing1-mathml3.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" dtd-version="1.2" xml:lang="en" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <front>
    <journal-meta>
  <journal-id journal-id-type="publisher-id">CCR</journal-id>
  <journal-title-group>
    <journal-title>Computational Communication Research</journal-title>
  </journal-title-group>
  <issn pub-type="ppub" />
  <issn pub-type="epub">2665-9085</issn>
  <publisher>
    <publisher-name>Amsterdam University Press</publisher-name>
    <publisher-loc>Amsterdam</publisher-loc>
  </publisher>
</journal-meta><article-meta>
      <article-id pub-id-type="publisher-id">CCR2026.1.2.ELDA</article-id><article-id pub-id-type="doi">10.5117/CCR2026.1.2.ELDA</article-id><article-categories><subj-group subj-group-type="heading"><subject>Article</subject></subj-group></article-categories><title-group>
        <article-title>Visual Framing in the AI Era: Lessons from Manual Approaches for Computational Methods</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <name>
            <surname>Damanhoury</surname>
            <given-names>Kareem El</given-names>
          </name>
          <aff>Department of Media, Film &amp; Journalism Studies, University of Denver, Denver CO USA</aff>
        </contrib>
        <contrib contrib-type="author">
          <name>
            <surname>Winkler</surname>
            <given-names>Carol</given-names>
          </name>
          <aff>Department of Communication, Georgia State University, Atlanta GA USA</aff>
        </contrib>
        <contrib contrib-type="author">
          <name>
            <surname>Lokmanoglu</surname>
            <given-names>Ayse D.</given-names>
          </name>
          <aff>Emerging Media Studies, Boston University, Boston MA USA</aff>
        </contrib>
        <contrib contrib-type="author">
          <name>
            <surname>Glanz</surname>
            <given-names>Keyu Alexander Chen</given-names>
          </name>
          <aff>Department of Communication, Georgia State University, Atlanta GA USA</aff>
        </contrib>
      </contrib-group>
      <pub-date pub-type="epub"><year>2026</year></pub-date><volume>8</volume><issue>1</issue><fpage>1</fpage><permissions><copyright-statement>© The authors</copyright-statement><copyright-year>2026</copyright-year><copyright-holder>The authors</copyright-holder><license license-type="open-access"><license-p>This is an open access article distributed under the CC BY 4.0 license <ext-link ext-link-type="uri" xlink:href="https://creativecommons.org/licenses/by/4.0/">https://creativecommons.org/licenses/by/4.0/</ext-link></license-p></license></permissions><abstract>
    <title>Abstract</title><p>Computational methods can minimize the time and resources needed to manually code thousands of images. Yet, they also come with challenges, including validation, algorithmic bias, and privacy concerns. Acknowledging that the pictorial turn has now entered a computational phase, this article reports on a manual and automated coding of 7000+ images to better understand online extremist content. Using Rodriguez and Dimitrova’s (<xref rid="bib.bibx91" ref-type="bibr">2011</xref>) four-tiered model of visual framing, the study compares manual coding and OpenAI’s GPT-4o coding of Al-Qaeda and ISIS images across the denotative, semiotic, connotative, and ideological levels. AI coding exhibited moderate to strong performance on denotative variables but was weaker in the semiotic and connotative tiers. The study concludes with a discussion of the advantages of humans and AI functioning together to better understand visual framing.</p>
  </abstract>
      <kwd-group>
        <title>Keywords:</title><kwd>AI</kwd><kwd>Visual Framing</kwd><kwd>Extremism</kwd><kwd>Photography</kwd><kwd>ISIS</kwd><kwd>Al-Qaeda</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="S1">
      
      
      
      
    <title>Introduction</title><p id="S1.p1">An explosion of visual images characterizes the 21st century public sphere. Individuals capture 57,000 photos every second (Broz, <xref rid="bib.bibx9" ref-type="bibr">2023</xref>), and daily share over 3 billion images on social media (JasenkaG, <xref rid="bib.bibx51" ref-type="bibr">2023</xref>) and one million memes on Instagram (Mcgil, <xref rid="bib.bibx75" ref-type="bibr">2024</xref>). Out of the more than 400 million daily terabytes of online data, video content accounts for more than half (Duarte, <xref rid="bib.bibx29" ref-type="bibr">2024</xref>). On YouTube alone, users upload more than 500 hours of video every minute (Ceci, <xref rid="bib.bibx13" ref-type="bibr">2024</xref>). Today, visuals constitute an imperative mode of messaging for virtually any communicator seeking to reach target audiences (Knobloch et al., <xref rid="bib.bibx55" ref-type="bibr">2003</xref>).</p><p id="S1.p2">Visuals are also key to strategic, persuasive messaging. Compared to text, visuals appear closer to the truth, rendering them a ready means of proof and authentication (Barthes, <xref rid="bib.bibx6" ref-type="bibr">1981</xref>; Messaris &amp; Abraham, <xref rid="bib.bibx77" ref-type="bibr">2001</xref>). They communicate ideological propositions (Edwards &amp; Winkler, <xref rid="bib.bibx30" ref-type="bibr">1997</xref>), create positive images of political causes (Sontag, 2003), evoke emotional responses (Perlmutter, <xref rid="bib.bibx89" ref-type="bibr">1998</xref>), constitute cultural memories and meanings (McClancy, <xref rid="bib.bibx72" ref-type="bibr">2013</xref>; Trachtenberg, <xref rid="bib.bibx101" ref-type="bibr">1985</xref>), and mobilize supporters (Hariman &amp; Lucaites, <xref rid="bib.bibx47" ref-type="bibr">2007</xref>; Mattoni &amp; Teune, <xref rid="bib.bibx70" ref-type="bibr">2014</xref>).</p><p id="S1.p3">While manual coding methods have helped better understand visual messaging strategies for decades, the sheer number of digital images poses a challenge. By minimizing time and resources needed to manually code images, computational methods are more efficient. Unsupervised learning can help reveal visual themes and categories within big datasets via a bag-of-visual words model (Torres, <xref rid="bib.bibx100" ref-type="bibr">2023</xref>; Zhang &amp; Peng, <xref rid="bib.bibx121" ref-type="bibr">2022</xref>). Scholars have used such automated approaches to detect clusters in politicians’ social media posts (Joo &amp; Steinert-Threlkeld, <xref rid="bib.bibx53" ref-type="bibr">2022</xref>; Peng, <xref rid="bib.bibx86" ref-type="bibr">2021</xref>), smartphone screen activities (Muise et al., <xref rid="bib.bibx81" ref-type="bibr">2022</xref>), online protest images (Zhang &amp; Peng, <xref rid="bib.bibx121" ref-type="bibr">2022</xref>), ring-wing memes (Lokmanoglu et al., <xref rid="bib.bibx66" ref-type="bibr">2023</xref>), and migrant photographs (Torres, <xref rid="bib.bibx100" ref-type="bibr">2023</xref>). Supervised machine learning, by contrast, involves training an AI model using labeled visual data before examining similar imagery. 
Studies have used this approach to identify protest depictions (Zhang &amp; Pan, <xref rid="bib.bibx120" ref-type="bibr">2019</xref>), image-text mismatches (Ha et al., <xref rid="bib.bibx44" ref-type="bibr">2020</xref>), differentiations between state and protester violence (Steinert-Threlkeld et al., <xref rid="bib.bibx97" ref-type="bibr">2022</xref>), and objects and faces including their emotions, gender, age, and visual aesthetics (e.g., <xref rid="bib.bibx4" ref-type="bibr">Bakhshi &amp; Gilbert, 2015</xref>; <xref rid="bib.bibx26" ref-type="bibr">Dietrich &amp; Ko, 2022</xref>; <xref rid="bib.bibx85" ref-type="bibr">Peng, 2018</xref>; <xref rid="bib.bibx88" ref-type="bibr">Peng &amp; Yingdan, 2023</xref>). Nonetheless, computational methods also come with their own fair share of challenges, such as validation, algorithmic bias, and privacy concerns (Chen et al., <xref rid="bib.bibx16" ref-type="bibr">2024</xref>, <xref rid="bib.bibx19" ref-type="bibr">2024</xref>; Williams et al., <xref rid="bib.bibx107" ref-type="bibr">2020</xref>; Zou &amp; Schiebinger, <xref rid="bib.bibx123" ref-type="bibr">2018</xref>).</p><p id="S1.p4">Acknowledging that W. J. T. Mitchell’s (<xref rid="bib.bibx78" ref-type="bibr">1995</xref>) pictorial turn has now entered a computational phase, this article manually examines several thousand images that Islamist extremist groups distributed to provide insights for improving AI’s usefulness in understanding online extremist content. After highlighting the usefulness of visual framing practices and discussing key challenges facing computational visual analyses today, we explicate the study’s comparative methodology. Then, we lay out five lessons emergent from the comparison useful for improving visual computational methods. The study concludes with a discussion of how mixed visual methods of AI and manual coding can bring better understandings to the levels of visual framing.</p><sec id="S1.SS1">
        
        
        
        
      <title>Visual Framing</title><p id="S1.SS1.p1">Rodriguez and Dimitrova’s (<xref rid="bib.bibx91" ref-type="bibr">2011</xref>) four-tiered model is one of the most cited works in the literature on visual framing (Walter &amp; Ophir, <xref rid="bib.bibx104" ref-type="bibr">2024</xref>), as it moves beyond atheoretical or exclusively semiotic approaches to engage with how image selection, cropping, and editing can capture attention, evoke emotions, carry meanings, and influence perceptions (Coleman, <xref rid="bib.bibx23" ref-type="bibr">2010</xref>; Geise &amp; Baden, <xref rid="bib.bibx38" ref-type="bibr">2015</xref>). Their model disaggregates visual framing into four tiers. The denotative level constitutes meaning by identifying salient frames through who and what an image depicts based on the scene and surrounding texts (Schwalbe, <xref rid="bib.bibx94" ref-type="bibr">2006</xref>). Denotative frames can be context-specific (Goffman, <xref rid="bib.bibx40" ref-type="bibr">1974</xref>, <xref rid="bib.bibx41" ref-type="bibr">1979</xref>) or function as equivalency-based dyads, such as gain versus loss and war versus peace (Cacciatore et al., <xref rid="bib.bibx11" ref-type="bibr">2016</xref>). The semiotic level suggests social meanings by examining visual grammar and conventions, including camera angles, perceived viewer distance, body posture, eye contact, and facial expressions (Forgas &amp; East, <xref rid="bib.bibx36" ref-type="bibr">2008</xref>; Hall, <xref rid="bib.bibx45" ref-type="bibr">1966</xref>; Kress &amp; Leeuwen, <xref rid="bib.bibx57" ref-type="bibr">2006</xref>). The connotative tier considers abstract and figurative symbols (e.g., metaphors) in the shot that can “combine, compress and communicate social meaning” (Rodriguez &amp; Dimitrova, <xref rid="bib.bibx91" ref-type="bibr">2011</xref>, p. 96). Finally, the ideological level expands on the visual elements, stylistic choices, and symbols to provide a holistic interpretation that underscores the political, religious, economic and/or demographic underpinnings of the visual constructions. Combined, the four-tiered model allows for a nuanced understanding of visual messaging.</p><p id="S1.SS1.p2">Studies of visual media campaigns by Islamist extremist groups have examined all four levels of the 4-tiered model, albeit with less emphasis on semiotics. At the denotative level, for example, Al-Qaeda’s visual campaign encompassed predominantly militant visual frames, such as training, operations, and martyrdom (Farwell, <xref rid="bib.bibx33" ref-type="bibr">2010</xref>; Center, <xref rid="bib.bibx14" ref-type="bibr">2005</xref>), highlighting political Islamist connotations associated with objects like flags and swords (Coleman, <xref rid="bib.bibx22" ref-type="bibr">2006</xref>). At the semiotic level, studies of ISIS dissected several pictorial stylistic conventions, including viewer distance, camera angle, eye contact, facial expressions, point-of-view shots, and dynamic versus static imagery (e.g., <xref rid="bib.bibx50" ref-type="bibr">Impara, 2018</xref>; <xref rid="bib.bibx110" ref-type="bibr">Winkler et al., 2019</xref>). At the connotative level, ISIS and al-Qaeda utilized symbols like AK47s, the monotheism hand gesture, and depictions of death and dying to convey meaning (Wignell et al., <xref rid="bib.bibx106" ref-type="bibr">2017</xref>; Winkler et al., <xref rid="bib.bibx109" ref-type="bibr">2018</xref>). 
Ideologically, both al-Qaeda and ISIS espoused an extreme Islamist lens steeped in a clash of civilizations narrative that promoted religious rule as an alternative to the nation-state system (Ciovacco, <xref rid="bib.bibx21" ref-type="bibr">2009</xref>), but had differing views on the nature and timing of the Caliphate (Kuznar, <xref rid="bib.bibx60" ref-type="bibr">2015</xref>). Yet, existing literature on al-Qaeda and ISIS’s media campaigns, while providing nuanced understandings of the four visual framing tiers, has been mainly limited to manual coding.</p></sec><sec id="S1.SS2">
        
        
        
        
      <title>Computational Visual Analysis</title><p id="S1.SS2.p1">Computational visual analysis on its own is not yet capable of fully engaging the four-tier model of visual framing. Existing computational approaches can identify some denotative and semiotic elements (e.g., humans, race, age, color, public figures, rifles, umbrellas, facial expressions) (Chen et al., <xref rid="bib.bibx17" ref-type="bibr">2022</xref>; Joo &amp; Steinert-Threlkeld, <xref rid="bib.bibx53" ref-type="bibr">2022</xref>; Muise et al., <xref rid="bib.bibx81" ref-type="bibr">2022</xref>; Zhang &amp; Peng, <xref rid="bib.bibx121" ref-type="bibr">2022</xref>). Yet, such analyses stop short of gauging key symbolic and ideological visual components of scenes and their contexts due to what Peng, Lock, and Salah (<xref rid="bib.bibx87" ref-type="bibr">2024</xref>) rightly argue is an automation-theoretical disconnect. This study addresses this gap by comparing manual and automated coding in the online extremism sphere in pursuit of a more effective, hybrid approach.</p><p id="S1.SS2.p2">Traditional computational image analysis typically relies on task-specific computer-vision architectures such as YOLO-style object detectors, Mask R-CNN segmentation models, or transformer-based vision–language models like BLIP-2 or Flamingo. These systems are designed to extract concrete visual features (e.g., objects, faces, scene layouts) and perform discrete tasks with high accuracy when trained on large, labeled datasets. However, they are not built to apply multi-layered interpretive schemas such as Rodriguez and Dimitrova’s four-tier model without extensive task-specific fine-tuning. Because our study evaluates GPT-4o in a zero-shot setting—asking a general-purpose vision-language model to follow a human-developed codebook—we position this approach as complementary to, rather than a replacement for, traditional CV pipelines.</p><p>AI techniques for the scanning, sampling, and quantization of visual images are rapidly evolving, rendering accurate summations of developments difficult (Zhang &amp; Dahu, <xref rid="bib.bibx122" ref-type="bibr">2019</xref>). Complicating the quickly changing terrain is that machines are now producing most images, often for other machines rather than the human eye (Paglen, <xref rid="bib.bibx83" ref-type="bibr">2019</xref>). Nonetheless, AI remains data-driven, rather than image-driven, meaning that understanding image-data relationships should remain a priority (Anderson, <xref rid="bib.bibx2" ref-type="bibr">2017</xref>). Whether computerized or not, visual cultures influence and are influenced by human biases in both production and consumption of online messaging (Bridle, <xref rid="bib.bibx8" ref-type="bibr">2023</xref>; Sezen, <xref rid="bib.bibx95" ref-type="bibr">2020</xref>).</p>
<p>Combining quantitative and qualitative analyses of big data can augment visual framing. Dondero, for example, maintains that large-scale diagrams produced through quantitative and semiotic analysis can assist in identifying “contrasting areas, opposite areas, or superposing of images on the plane of expression” useful for further quantitative and qualitative analysis (Dondero, <xref rid="bib.bibx28" ref-type="bibr">2019</xref>, p. 140). Such a combined approach preserves the importance of visual context within a specified corpus and across the image components. In short, she advocates for scholars to use her process as a metavisual device for four reasons:</p><disp-quote><p>(1) these visualizations are images of images; (2) the parameters used to arrange them are visual; (3) the automatic distribution of the images is visualized spatially in a presentation governed by abscissas and ordinates; (4) the content analysis…remains within the realm of images (filiation, tradition, citation, genre, etc.) and not the abstract realm of verbal description (Dondero, <xref rid="bib.bibx28" ref-type="bibr">2019</xref>, p. 141).</p></disp-quote><p>Here, we agree with Dondero (<xref rid="bib.bibx28" ref-type="bibr">2019</xref>) about the value of computerized quantitative analysis for assisting qualitative results. However, we add that quantitative human coding analysis, combined with statistical assessments, can further strengthen the interpretation of results. We begin by asking:</p><list id="S1.I1"><list-item id="S1.I1.ix1">
              
              
              
            <p id="S1.I1.ix1.p1">How effectively can a vision-language model (GPT-4o) apply a
human-developed content-coding schema to Rodriguez and Dimitrova’s
four-tiered visual framing analysis?</p></list-item></list><p id="S1.SS2.p5">Large language models and NLP pipelines can efficiently process large text corpora, grouping semantically similar responses and surfacing recurring patterns. Gamieldien et al. (<xref rid="bib.bibx37" ref-type="bibr">2023</xref>) find that transformer-based tools can generate highly granular codes across thousands of student reflections, substantially reducing human labor. This aligns with earlier infrastructure-oriented work showing NLP can automatically classify predictable text categories. Similarly, Morgan (<xref rid="bib.bibx79" ref-type="bibr">2023</xref>) reports that ChatGPT performs well when themes are concrete and descriptive, requiring little interpretive inference. In hybrid interfaces, rule-based suggestions can be systematically extended to unseen data (Rietz &amp; Maedche, <xref rid="bib.bibx90" ref-type="bibr">2021</xref>), increasing agreement rather than replacing human interpretation and underscoring that automation is most reliable for patterned, literal, and structurally evident meaning.</p><p id="S1.SS2.p6">Despite scalability gains, current AI systems appear to underperform when meaning depends on tacit knowledge, ambiguity, or socio-cultural interpretations. Gamieldien et al. (<xref rid="bib.bibx37" ref-type="bibr">2023</xref>) note the need for AI researcher oversight when semantic nuance matters. Studies of thematic automation note that disagreement among humans themselves reflects interpretive pluralism (Armstrong et al., <xref rid="bib.bibx3" ref-type="bibr">1997</xref>; Mackieson et al., <xref rid="bib.bibx68" ref-type="bibr">2019</xref>) — something models are poorly equipped to resolve. Transformer architectures excel at long-range dependencies (Lakretz et al., <xref rid="bib.bibx61" ref-type="bibr">2020</xref>), but still primarily attend to textual features rather than framing context, affective tone, or symbolic cues. These limitations indicate that interpretive coding requires judgment beyond probabilistic associations, particularly in indexical, connotative, or historically situated domains.</p><p>Across studies, researchers express caution about fully delegating thematic interpretation to automated systems. Marathe and Toyama (<xref rid="bib.bibx69" ref-type="bibr">2018</xref>) report reluctance rooted in opacity, loss of theoretical accountability, and limited opportunities to question automated outputs (Chen et al., <xref rid="bib.bibx18" ref-type="bibr">2018</xref>). Rietz and Maedche (<xref rid="bib.bibx90" ref-type="bibr">2021</xref>) similarly find researchers use automated suggestions not to accelerate coding, but to reflect on needed codebook refinements. Such reflection aligns with iterative qualitative traditions emphasizing continuous interpretation (e.g., <xref rid="bib.bibx7" ref-type="bibr">Braun &amp; Clarke, 2006</xref>; <xref rid="bib.bibx92" ref-type="bibr">Saldana, 2021</xref> cited in <xref rid="bib.bibx37" ref-type="bibr">Gamieldien et al., 2023</xref>). Thus, for tasks requiring contextual inference, socio-cultural reading, or interpretive framing, human coders continue to outperform computational models. Because visual framing often requires reading symbolism, composition, affect, and implied narratives, we ask:</p><list id="S1.I2"><list-item id="S1.I2.ix1">
              
              
              
            <p id="S1.I2.ix1.p1">Which dimensions of visual framing remain resistant to automation, and what do these limitations reveal about the strengths of human coding in visual analysis?</p></list-item></list></sec><sec id="S1.SS3">
        
        
        
        
      <title>Methodology</title><p id="S1.SS3.p1">To assess the effectiveness of GPT-4o for applying a human-developed content-coding schema to visual framing analysis, we began by conducting a human-coded content analysis of 7,292 images from al-Qaeda and ISIS’s English and Arabic magazines or newsletters distributed 2009-2020 (see Table <xref rid="S1.T1" ref-type="table">1</xref>). For al-Qaeda, the English issues included Inspire (1-17) and Jihad Recollections (1-4), and the Arabic issues were al-Masra (1-57). ISIS’s English issues included Dabiq (1-15) and Rumiyah (1-13), and al-Naba (1-229) in Arabic. All items were publicly available through Google, Jihadology (Zelin, <xref rid="bib.bibx118" ref-type="bibr">2021</xref>), or archive.org.</p><table-wrap id="S1.T1"><label>Table 1:</label><caption><title>Image Count in Al-Qaeda and ISIS Online Publications</title></caption>
          
          
          
          
        
<table>
<thead>
<tr>
<th>Group</th>
<th><p>Publications</p></th>
<th>Frequency</th>
<th>Percent</th>
<th>Total / % of group image count</th></tr>
</thead>
<tbody>
<tr>
<td>AQ</td>
<td><p>Jihad Recollections</p></td>
<td>442</td>
<td>6.06</td>
<td>3466 (47.5%)</td></tr>
<tr>
<td />
<td><p>Inspire</p></td>
<td>1842</td>
<td>25.26</td>
<td /></tr>
<tr>
<td />
<td><p>Al Masra</p></td>
<td>1182</td>
<td>16.21</td>
<td /></tr>
<tr>
<td>ISIS</td>
<td><p>Dabiq</p></td>
<td>1391</td>
<td>19.08</td>
<td>3826 (52.5%)</td></tr>
<tr>
<td />
<td><p>Rumiyah</p></td>
<td>273</td>
<td>3.74</td>
<td /></tr>
<tr>
<td />
<td><p>Al Naba</p></td>
<td>2162</td>
<td>29.65</td>
<td /></tr>
<tr>
<td colspan="2">Total across groups</td>
<td>7292</td>
<td>100.0</td>
<td /></tr>
</tbody>
</table></table-wrap><p id="S1.SS3.p2">We utilized 13 expert coders from Egypt, Afghanistan, Turkey, Saudi Arabia, Syria, Poland, Vietnam, and the United States to create, refine, and utilize a visual analysis codebook. Our human coders had doctorates or graduate training in Communication Studies, Psychology, Political Science, and Education. The pilot phase involved three US and Egyptian coders who created coding categories inductively from images in Dabiq’s first issue until intercoder reliability was higher than 0.80 on Cohen’s kappa for all variables. Coders met weekly to identify and resolve discrepancies and cross-cultural differences that produced unacceptable reliability levels. In cases of disagreements, a bias toward the Middle Eastern perspective prevailed in line with the primary target audience. With a reliable codebook, 13 coders received oral and written training and analyzed each image in the dataset. The average intercoder reliability score using Cohen’s kappa across all categories was 0.91 (see Table <xref rid="S1.T2" ref-type="table">2</xref>). A third coder resolved discrepancies for statistical analysis.</p><table-wrap id="S1.T2"><label>Table 2:</label><caption><title>Intercoder reliability and description of the manual coding instrument</title></caption>
          
          
          
          
        
<table>
<thead>
<tr>
<th>Variable</th>
<th><p>Description of coding clusters &amp; categories</p></th>
<th>% Agree</th>
<th>Cohen’s <inline-formula><mml:math id="S1.T2.m1" alttext="\kappa" display="inline"><mml:mi mathsize="90%">κ</mml:mi></mml:math></inline-formula></th></tr>
</thead>
<tbody>
<tr>
<td colspan="4"><italic>Denotative categories</italic></td></tr>
<tr>
<td>Military role</td>
<td><p>Extremist militants; enemy militants; mixed; none</p></td>
<td>95.9</td>
<td>0.92</td></tr>
<tr>
<td>Death</td>
<td><p>About to die; dead bodies; not applicable</p></td>
<td>94.8</td>
<td>0.93</td></tr>
<tr>
<td>Flags</td>
<td><p>ISIS/AQ flag; U.S. flag; MENA country flag; other; mixed; not applicable</p></td>
<td>97.2</td>
<td>0.93</td></tr>
<tr>
<td>Humans</td>
<td><p>1 human; 2–10 humans; <inline-formula><mml:math id="S1.T2.m2" alttext="&gt;10" display="inline"><mml:mrow><mml:mi /><mml:mo mathsize="90%">&gt;</mml:mo><mml:mn mathsize="90%">10</mml:mn></mml:mrow></mml:math></inline-formula> humans; no humans</p></td>
<td>95.1</td>
<td>0.93</td></tr>
<tr>
<td>Destruction</td>
<td><p>Presence of fire, explosions, or other acts of destruction in process; destroyed buildings, bridges, or vehicles; not applicable</p></td>
<td>97.9</td>
<td>0.92</td></tr>
<tr>
<td>Leaders</td>
<td><p>Jihad leaders; Western leaders; Arab state leaders; Asian/Russian leaders; other Muslim leaders; African leaders; mixed; none</p></td>
<td>97.4</td>
<td>0.91</td></tr>
<tr>
<td colspan="4"><italic>Semiotic factors</italic></td></tr>
<tr>
<td>Viewer position</td>
<td><p>Viewer looking up at photo subject; looking down; eye level; not applicable</p></td>
<td>95.6</td>
<td>0.88</td></tr>
<tr>
<td>Facial expressions</td>
<td><p>Positive; negative; unclear; not applicable</p></td>
<td>95.0</td>
<td>0.89</td></tr>
<tr>
<td>Stance</td>
<td><p>On knees (not praying); sitting; standing; lying down; praying (on knees or bending over); mixed; not applicable</p></td>
<td>97.2</td>
<td>0.96</td></tr>
<tr>
<td>Image position</td>
<td><p>Foreground; background</p></td>
<td>97.2</td>
<td>0.86</td></tr>
<tr>
<td>Viewer distance</td>
<td><p>Intimate/personal distance (<inline-formula><mml:math id="S1.T2.m3" alttext="&lt;4" display="inline"><mml:mrow><mml:mi /><mml:mo mathsize="90%">&lt;</mml:mo><mml:mn mathsize="90%">4</mml:mn></mml:mrow></mml:math></inline-formula> ft.); social/public distance (<inline-formula><mml:math id="S1.T2.m4" alttext="&gt;4" display="inline"><mml:mrow><mml:mi /><mml:mo mathsize="90%">&gt;</mml:mo><mml:mn mathsize="90%">4</mml:mn></mml:mrow></mml:math></inline-formula> ft.); mixed; unknowable (e.g., infographics, posters, maps, banners, or non-photographic images)</p></td>
<td>89.4</td>
<td>0.87</td></tr>
<tr>
<td>Eye contact</td>
<td><p>Photo subject looks directly at viewer; looks up; looks down; looks away; not looking; not applicable</p></td>
<td>95.5</td>
<td>0.92</td></tr>
<tr>
<td colspan="4"><italic>Connotative factors</italic></td></tr>
<tr>
<td>Governance</td>
<td><p>State building (e.g., social services, market, currency, passports, maps, natural landscape); law enforcement; allegiance pledges; media/public information; mixed; not applicable</p></td>
<td>97.7</td>
<td>0.91</td></tr>
<tr>
<td>About to die</td>
<td><p>Certain death; possible death; presumed death; none</p></td>
<td>97.3</td>
<td>0.94</td></tr>
<tr>
<td>Religion</td>
<td><p>Tawhid gesture; reading Qur’an/Qur’anic texts; individual prayer; Hajj; religious iconography or shrines; mixed; none</p></td>
<td>98.3</td>
<td>0.88</td></tr>
</tbody>
</table></table-wrap><fig id="S1.F1"><label>Figure 1:</label><caption><title>Historical Overview of Study’s Al-Qaeda and ISIS Publications</title></caption>
          
          
          
          
        <graphic xlink:href="figures/figure1.svg" /></fig><p id="S1.SS3.p3">We sorted our human coding categories into four levels of visual framing. While Rodriguez and Dimitrova (<xref rid="bib.bibx91" ref-type="bibr">2011</xref>) note that the four tiers can overlap, we focused on the level that three coders considered most suited to the images’ characteristics. The meaning of denotative elements (or objects that bore an indexical relationship with an individual, thing, or place) involves two interrelated processes. First, the coding process accounts for elements the viewers can see in the shot, rendering them “closer to the truth than other forms of communication” (Messaris &amp; Abraham, <xref rid="bib.bibx77" ref-type="bibr">2001</xref>, p. 217). Fully gauging the denotative meaning, however, requires generating frames that derive from a second process that looks not only at the textual context (Rodriguez &amp; Dimitrova, <xref rid="bib.bibx91" ref-type="bibr">2011</xref>), but also into the syntactic relationships between images. Compared to words, images lack an explicit prepositional syntax, or the ability to propagate clear causal relationships, similarities, or other forms of connections (Messaris &amp; Abraham, <xref rid="bib.bibx77" ref-type="bibr">2001</xref>). Our denotative categories included military fighters, death, humans, leaders, flags, and destruction.</p><p id="S1.SS3.p4">The second level of semiotics (or stylistic, technical, and subject conventions) focuses on “signs and symbols, sign systems, and sign processes” (Moriarty, <xref rid="bib.bibx80" ref-type="bibr">2002</xref>, p. 20). Visual semiotics fulfills three meta functions: compositional, representative, and interactive (Kress &amp; Leeuwen, <xref rid="bib.bibx58" ref-type="bibr">1996</xref>, <xref rid="bib.bibx57" ref-type="bibr">2006</xref>), as it emphasizes four types of signification: arbitrary (by convention), memetic (by iconic representations), evidential (by cues and codes), and signaling (by recognition) (Moriarty, <xref rid="bib.bibx80" ref-type="bibr">2002</xref>). Metaphors or metaphorical thinking (Feng &amp; O’Halloran, <xref rid="bib.bibx34" ref-type="bibr">2013</xref>), and visual metonymics linked to abstract concepts and objects/events are also associated with semiotics (Feng, <xref rid="bib.bibx35" ref-type="bibr">2017</xref>). Accordingly, our semiotic categories included viewer position, image position, viewer distance, eye contact, and facial expressions.</p><p id="S1.SS3.p5">The third framing level of connotation involves visual symbols linked to ideas or concepts associated with individuals, things, or places. Turner insists symbology draws its data from “cultural genres or sub systems of expressive culture…as well as narrative genres, such as myth, epic, ballad, the novel and ideological systems [and t]hey would also include non-verbal forms” (Turner, <xref rid="bib.bibx103" ref-type="bibr">1979</xref>, p. 12). Symbols can allude to power, authority, faith, rituals, and death, among others, to achieve personal or group goals (Turner, <xref rid="bib.bibx102" ref-type="bibr">1974</xref>). At the iconographical level, visual symbols go beyond the depicted object or person to connote ideas and concepts; they begin to reveal ideological meanings derived from backgrounds and the surrounding context (Panofsky, <xref rid="bib.bibx84" ref-type="bibr">1955</xref>; Leeuwen, <xref rid="bib.bibx62" ref-type="bibr">2001</xref>). 
Our connotative categories included about to die, religion, and state, with the latter disaggregated into state-building, law enforcement, allegiance pledges, and media propaganda for a more fine-grained analysis (see Appendix <xref rid="A1" ref-type="sec">A</xref> for category meanings; Table <xref rid="A1.T1" ref-type="table">A1</xref> for examples).</p><p id="S1.SS3.p6">We removed several manual coding categories from our computational model. We excluded image size and position because the workflow operated on individual images rather than publication layouts. We omitted gender because the overwhelming majority of individuals displayed were male, with females only as occasional outliers. We removed age and social infrastructure as neither produced significant results across more than a dozen papers addressing how message strategies intersected with situational factors.</p><p id="S1.SS3.p7">To compare manual content coding with AI visual coding, we built a lightweight, fully reproducible inference pipeline that (1) ingested image files, (2) encoded each image and sent it to GPT-4o together with a structured labeling prompt derived from our codebook, and (3) compiled the model’s outputs into a standardized dataset for evaluation against human annotations. The pipeline did not fine-tune model weights; it relied on constrained prompting and schema validation to ensure consistent, interpretable results (see Figure 2).</p><fig id="S1.F2"><label>Figure 2:</label><caption><title>AI Visual Coding Inference Pipeline</title></caption>
          
          
          
          
        <graphic xlink:href="figures/Figure2.svg" /></fig><sec id="S1.SS3.SSS1">
          
          
          
          
        <title>Step 1: Image Preparation and Alignment</title><p id="S1.SS3.SSS1.p1">All images were extracted from the full corpus of al-Qaeda and ISIS publications and matched to their manual coding entries. Each file was saved using a standardized naming convention that included publication, issue number, and image number (e.g., D_12_04), allowing a one-to-one linkage between visual material and its metadata and ensuring that each image could be traced back to its source and manually coded attributes.</p>
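<p>For illustration, a minimal Python sketch of this alignment step might parse the naming convention into its parts and join each file to its manual coding entry (the pattern follows the D_12_04 example above; the file and field names are hypothetical):</p>
<preformat>
import csv
import re
from pathlib import Path

# Pattern for the naming convention: publication code, issue number, image
# number, e.g., "D_12_04" (publication code abbreviations are assumed here).
NAME_PATTERN = re.compile(r"^([A-Za-z]+)_(\d+)_(\d+)$")

def parse_image_id(path):
    """Split a standardized filename into publication, issue, and image number."""
    match = NAME_PATTERN.match(path.stem)
    if match is None:
        raise ValueError("Unexpected filename: " + path.name)
    pub, issue, image = match.groups()
    return {"publication": pub, "issue": int(issue), "image": int(image),
            "file": str(path)}

# Join each image file to its manual coding entry, keyed by the same
# identifier ("manual_codes.csv" and its "image_id" column are hypothetical).
with open("manual_codes.csv", encoding="utf-8") as f:
    manual = {row["image_id"]: row for row in csv.DictReader(f)}
records = [dict(parse_image_id(p), manual=manual.get(p.stem))
           for p in sorted(Path("images").glob("*.*"))]
</preformat></sec><sec id="S1.SS3.SSS2">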
          
          
          
          
        <title>Step 2: Preprocessing and Encoding</title><p id="S1.SS3.SSS2.p1">The repository contained PNG, JPG, JPEG, GIF, BMP, and WEBP formats. Images were maintained at their original resolution without resizing or alteration of embedded metadata to preserve visual detail integrity. Each image was converted into a text-based data format through base64 encoding, which allows visual information to be transmitted losslessly as text while retaining the original pixel structure (Lokmanoglu &amp; Walter, <xref rid="bib.bibx67" ref-type="bibr">2025</xref>). Detailed logs documented each image’s processing status to ensure completeness and traceability.</p>
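<p>A base64 encoding step of this kind requires only the Python standard library; the sketch below (with hypothetical paths and log file names) also records each image’s processing status, as described above:</p>
<preformat>
import base64
import logging
from pathlib import Path

logging.basicConfig(filename="encoding_log.txt", level=logging.INFO)

# Formats present in the repository, per the text above.
EXTENSIONS = {".png", ".jpg", ".jpeg", ".gif", ".bmp", ".webp"}

def encode_image(path):
    """Read raw bytes and return a base64 string; pixel data is untouched."""
    data = path.read_bytes()
    encoded = base64.b64encode(data).decode("ascii")
    logging.info("encoded %s (%d bytes)", path.name, len(data))
    return encoded

payloads = {p.stem: encode_image(p)
            for p in sorted(Path("images").iterdir())
            if p.suffix.lower() in EXTENSIONS}
</preformat></sec><sec id="S1.SS3.SSS3">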
          
          
          
          
        <title>Step 3: Model Inference and Structured Prompting</title><p id="S1.SS3.SSS3.p1">We provided each encoded image directly to GPT-4o (OpenAI, <xref rid="bib.bibx82" ref-type="bibr">2024</xref>) along with a structured prompt adapted from the visual framing codebook. The prompt specified the exact coding categories (e.g., military role, human figures, eye contact, state-building, and religious symbolism) and required the model to classify each image according to those predefined labels (see Appendix <xref rid="A2" ref-type="sec">B</xref>). Category definitions were embedded in the prompt to guide consistent decision-making. The model’s responses were constrained to a fixed output schema composed of numeric identifiers corresponding to each category.</p><p id="S1.SS3.SSS3.p2">This study employed a zero-shot prompting design. The model received the structured labeling prompt and visual input simultaneously without prior exposure, calibration subset, or iterative refinement. The objective was not to train or improve GPT-4o’s performance but to evaluate how an off-the-shelf vision-language model applied an existing human content-coding schema. Each API call contained a single image and the associated codebook prompt without captions, metadata, or textual context.</p><p id="S1.SS3.SSS3.p3">To ensure AI response integrity, we monitored outputs for potential misclassifications or omissions related to sensitive or graphic imagery. System logs were reviewed after each coding run to identify any uncoded images, whether flagged as indeterminate or potentially withheld due to ethical safeguards. No systematic filtering or suppression of graphic content occurred, but the logging process allowed for post hoc verification should future discrepancies arise.</p>
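<p>As one possible realization of this step, a single zero-shot call might look like the following sketch, which uses the official openai Python package; the abbreviated prompt and helper names are hypothetical, and the full codebook prompt appears in Appendix B:</p>
<preformat>
import json
from openai import OpenAI  # assumes the official openai package (v1 API)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Abbreviated stand-in for the codebook prompt; the real prompt embeds full
# category definitions and the numeric label options for every variable.
PROMPT = (
    "Classify the image using the predefined codebook categories. "
    "military_role: 1=extremist militants, 2=enemy militants, 3=mixed, 4=none. "
    "humans: 1=one human, 2=two to ten, 3=more than ten, 4=none. "
    "Return only a JSON object mapping each variable name to one numeric label."
)

def code_image(b64_image):
    """Send one image plus the codebook prompt; no captions or metadata."""
    response = client.chat.completions.create(
        model="gpt-4o",
        temperature=0,  # as deterministic as the API allows
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": PROMPT},
                {"type": "image_url",
                 "image_url": {"url": "data:image/jpeg;base64," + b64_image}},
            ],
        }],
    )
    return json.loads(response.choices[0].message.content)
</preformat></sec><sec id="S1.SS3.SSS4">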
          
          
          
          
        <title>Step 4: Output Aggregation and Reliability Assessment</title><p id="S1.SS3.SSS4.p1">The model’s outputs were compiled into a unified dataset and compared with the manual coding results using standard reliability and performance metrics, including precision (the proportion of correct positive predictions), recall (the proportion of actual positives correctly identified), and F1-score (the harmonic mean of precision and recall). We also report F2-scores (which weight recall more heavily than precision), macro-averaged and per-class metrics, as well as two measures of intercoder reliability between AI and human coders. Reports of percent agreement served as an intuitive measure of alignment.</p><p id="S1.SS3.SSS4.p2">Following automated content analysis research standards, F1 scores above 0.80 indicated excellent agreement between AI and human coding, scores between 0.70 and 0.80 represented acceptable performance, and scores between 0.60 and 0.70 suggested moderate agreement (Burscher et al., <xref rid="bib.bibx10" ref-type="bibr">2014</xref>; Chan et al., <xref rid="bib.bibx15" ref-type="bibr">2021</xref>). For Cohen’s κ, values above 0.80 were considered reliable, while values between 0.67 and 0.80 were deemed acceptable for exploratory research (Krippendorff, <xref rid="bib.bibx59" ref-type="bibr">2018</xref>). These benchmarks were particularly relevant given that human inter-coder reliability in visual content analysis typically ranges from 0.70 to 0.85 (Song et al., <xref rid="bib.bibx96" ref-type="bibr">2020</xref>).</p>
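<p>All of the agreement statistics named above are available in standard Python libraries; a minimal sketch, assuming scikit-learn and the krippendorff package and using hypothetical label arrays for a single variable, might read:</p>
<preformat>
import numpy as np
import krippendorff  # pip install krippendorff
from sklearn.metrics import (balanced_accuracy_score, cohen_kappa_score,
                             fbeta_score, precision_score, recall_score)

human = np.array([1, 4, 4, 2, 1, 4])  # hypothetical labels for one variable
ai = np.array([1, 4, 2, 2, 1, 4])     # the model's labels for the same images

scores = {
    # Macro-averaging weights every class equally; the study also reports
    # weighted (prevalence-based) and per-class versions of these metrics.
    "precision": precision_score(human, ai, average="macro", zero_division=0),
    "recall": recall_score(human, ai, average="macro", zero_division=0),
    "f1": fbeta_score(human, ai, beta=1, average="macro"),  # harmonic mean
    "f2": fbeta_score(human, ai, beta=2, average="macro"),  # recall-weighted
    "balanced_accuracy": balanced_accuracy_score(human, ai),
    "percent_agreement": float((human == ai).mean()),
    "cohen_kappa": cohen_kappa_score(human, ai),
    "krippendorff_alpha": krippendorff.alpha(
        reliability_data=[human, ai], level_of_measurement="nominal"),
}
</preformat></sec></sec><sec id="S1.SS4">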
        
        
        
        
      <title>Findings</title><p id="S1.SS4.p1"><bold>RQ1</bold> asked how effectively a vision-language model (GPT-4o) could apply a human-developed content-coding schema to Rodriguez and Dimotrova’s four tiers of visual framing. The AI coding performance differed in substantial ways across variables and framing tiers (see Table <xref rid="S1.T3" ref-type="table">3</xref> and Appendix <xref rid="A2" ref-type="sec">B</xref> Table <xref rid="A2.T1" ref-type="table">B1</xref>). Denotative variables showed the strongest alignment between AI and human coding. Humans, Destruction, Leaders, and Flags demonstrated the most consistent performance across metrics. Humans yielded an F1 of 0.80 and a Balanced Accuracy of 0.83. Destruction performed similarly (F1 = 0.79, Balanced Accuracy = 0.83). Leaders and Flags showed moderately strong performance, with F1 values generally ranging from 0.70 to 0.78 and Balanced Accuracy typically above 0.75. Krippendorff’s <inline-formula><mml:math id="S1.SS4.p1.m1" alttext="\alpha" display="inline"><mml:mi>α</mml:mi></mml:math></inline-formula> values for these variables were moderately reliable, ranging from approximately 0.40 to 0.60. As shown in Figure <xref rid="A2.F1" ref-type="fig">B1</xref>, these categories were dominated by absent cases, inflating agreement due to prevalence rather than consistent recognition of positive cases. Weighted F1 values were very close to F1 for denotative variables, suggesting that class imbalance had limited influence on AI performance in this category.</p><p id="S1.SS4.p2">Denotative variables requiring additional contextual interpretation showed more modest alignment. Military Role achieved an F1 of 0.38, a Balanced Accuracy of <inline-formula><mml:math id="S1.SS4.p2.m1" alttext="0.73" display="inline"><mml:mn>0.73</mml:mn></mml:math></inline-formula>, and a Krippendorff’s <inline-formula><mml:math id="S1.SS4.p2.m2" alttext="\alpha" display="inline"><mml:mi>α</mml:mi></mml:math></inline-formula> around 0.30, indicating reliability for distinguishing between combatants and non-combatants below acceptable thresholds. Death showed somewhat higher performance (F1 = 0.52, Balanced Accuracy = 0.65), although still below that of other denotative variables. For both Military Role and Death, weighted F1 scores were notably higher than F1, indicating that performance was disproportionately driven by the majority “not applicable” class and the model struggled to identify less frequent positive cases.</p><p id="S1.SS4.p3">Semiotic variables requiring interpretations of bodily, expressive, or relational cues showed weaker correspondence. Viewer Position, Viewer Distance, Eye Contact, Stance, and Facial Expression produced F1 values generally 0.60-0.72, Balanced Accuracy values 0.72- 0.76, and Krippendorff’s <inline-formula><mml:math id="S1.SS4.p3.m1" alttext="\alpha" display="inline"><mml:mi>α</mml:mi></mml:math></inline-formula> around 0.20 to 0.40, falling below the 0.67 acceptable threshold for exploratory research. Across all semiotic variables, Balanced Accuracy consistently exceeded F1, indicating that while the model could distinguish broad classes, it did not reliably identify positive instances. Weighted F1 values were consistently higher than F1, confirming that the predominance of absent codes influenced performance. 
These semiotic findings suggest that discerning gaze direction, bodily posture, or facial expression requires contextual and relational sensitivity that current image models do not robustly encode.</p><p id="S1.SS4.p4">Connotative variables tied to symbolic or ideological meanings showed the weakest correspondence. State-building performed poorly (F1 <inline-formula><mml:math id="S1.SS4.p4.m1" alttext="\approx" display="inline"><mml:mo>≈</mml:mo></mml:math></inline-formula> 0.24, Balanced Accuracy <inline-formula><mml:math id="S1.SS4.p4.m2" alttext="\approx" display="inline"><mml:mo>≈</mml:mo></mml:math></inline-formula> 0.43, Krippendorff’s <inline-formula><mml:math id="S1.SS4.p4.m3" alttext="\alpha\approx" display="inline"><mml:mrow><mml:mi>α</mml:mi><mml:mo>≈</mml:mo><mml:mi /></mml:mrow></mml:math></inline-formula> 0.05 to 0.10), indicating minimal reliability. About to Die and Religion displayed comparably low performance, with <inline-formula><mml:math id="S1.SS4.p4.m4" alttext="\alpha" display="inline"><mml:mi>α</mml:mi></mml:math></inline-formula> values often approaching zero. Weighted F1 values for connotative variables were only slightly higher than F1, suggesting that performance limitations stemmed not only from class imbalance but also from the framing elements’ conceptual and inferential nature. Connotative categories contained very few positive cases, limiting model exposure and contributing to low performance (see Figure <xref rid="A2.F1" ref-type="fig">B1</xref>). While the model detected overt symbolic cues, it struggled with implicit or context-dependent signals of state-building, martyrdom, or religious practice that require interpretive inference beyond visual pattern matching.</p><table-wrap id="S1.T3"><label>Table 3:</label><caption><title>F1 and F2 scores, balanced accuracy, percent agreement, Krippendorff’s alpha, and Cohen’s kappa for each visual framing variable</title></caption>
          
          
          
          
        </table-wrap><p id="S1.SS4.p5"><bold>RQ2</bold> asked which dimensions of visual framing remain resistant to automation, and what these limitations reveal about the strengths of human coding in visual analysis.</p><p id="S1.SS4.p6"><italic>Lesson #1: Denotative Interactions</italic></p><p id="S1.SS4.p7">Despite AI’s stronger performance at the denotative level, inaccuracies and omissions were present. Military roles, for example, showed only modest agreement levels, as manual coders examined taglines and accompanying text to distinguish al-Qaeda and ISIS militants from enemy fighters. Without these textual cues, AI required either additional training on uniform types or human intervention. Additionally, AI could detect objects in ISIS imagery like coins, outdoor markets, and competing currencies, but required researchers to recognize their connotative meaning, such as ISIS’s desired frame of economic independence. Similarly, AI could detect bottles of alcohol, drugs, and cigarettes (see Figure <xref rid="S1.F3" ref-type="fig">3</xref>), but required human coders to recognize them as critical components of ISIS’s moral policing apparatus. In short, the addition of human coding can render computational methods less likely to miss key objects or elements in big datasets and more likely to properly assess their functions.</p><p id="S1.SS4.p8">Another key contribution of manual coding to AI processing involves aggregation of denotative elements. Our AI learning approaches generated disparate visual elements, such as militants fighting or training, raised index fingers, swords, maps, doctors treating patients, and sunsets. Human coding, however, captured subtle visual relationships that revealed broader themes of military prowess and state-building, identified how relationships created cohesive narratives, showed how symmetry, repetition or alignment influenced viewer interpretations, and unveiled the visual syntax strategy. For example, human coders identified the interrelationship between images of beheadings, piles of cigarettes, and checkpoints as ISIS’s law enforcement apparatus, complete with punishments for alleged spies, the moral policing apparatus, and the access points for determining who could enter the caliphate. Entman’s framing functions were useful for unlocking the messaging strategy. The visual law enforcement frame communicated that sins were rampant (problem definition) because of people smuggling contraband, committing treason, and ignoring Islamic rules (causal interpretation), which stained the society (moral evaluation), thus requiring punishments and crackdowns on contraband to ensure community safety (treatment recommendation). An unsupervised learning computational approach on its own would not fully reveal the visual narrative.</p><fig id="S1.F3"><label>Figure 3:</label><caption><title>Photo from the 10th issue of Dabiq magazine showing ISIS’s hisba agents burning cigarettes and alcohol – Released July 2015</title></caption>
          
          
          
          
        <graphic xlink:href="figures/Figure3.svg" /></fig><p id="S1.SS4.p9"><italic>Lesson #2: Semiotic Interactions</italic></p><p id="S1.SS4.p10">For the most part, AI performed poorly on visual semiotic elements, rendering human coding highly valuable in this domain. With Krippendorff’s <inline-formula><mml:math id="S1.SS4.p10.m1" alttext="\alpha" display="inline"><mml:mi>α</mml:mi></mml:math></inline-formula> consistently lower than .67 between AI and human coding for viewer distance, eye contact, body stance, and facial expressions, the use of supervised AI was neither efficient nor cost-effective in analyzing the semiotics of al-Qaeda and ISIS’s images. Human training and validation, however, could help refine steps for identifying more useful, valid, and efficient processing of visual semiotics patterns. One example is the use of direct eye contact because it conveys dominance (Appendix <xref rid="A1" ref-type="sec">A</xref>). From 2016 to 2018, ISIS used direct eye contact as a frequent visual tactic, but the use of the strategy differed based on whether the depicted subject was a friend or foe and if the language of publication was English or Arabic. Yet, supervised training for AI only produced a .54 Krippendorf’s <inline-formula><mml:math id="S1.SS4.p10.m2" alttext="\alpha" display="inline"><mml:mi>α</mml:mi></mml:math></inline-formula> with manual coding. Intercoder reliability difficulties related to eye contact suggest that a carefully constructed, rule-based definition of what constitutes eye contact and avoidance would be necessary to reduce noise in the AI extraction process.</p><p id="S1.SS4.p11">Human coding could also help narrow categories down to coding options most useful for understanding visual strategies. For example, semiotic understandings of viewer distance focus on four categories: intimate, personal, public, and social (Jewitt &amp; Oyama, <xref rid="bib.bibx52" ref-type="bibr">2008</xref>). Each conveys specific meanings associated with standard human interactions (e.g., intimate distances associated with distraught photo subjects; photo subjects shot at a public distance conveying group rather than individual identities). However, human coding and statistical analysis revealed that significant findings primarily appeared only after combining intimate and personal distance (with photo subjects photographed from less than four feet) and comparing them with the combined categories of public and social distance (i.e., greater than four feet). Such findings help avoid overlooking important patterns that could be missed with strict adherence to viewer distance’s four standards.</p><p id="S1.SS4.p12"><italic>Lesson #3: Connotative Interactions</italic></p><p id="S1.SS4.p13">While computational coding could efficiently identify key symbols, human content expertise helped interpret needed cultural genres and subsystems. The black flag, for example, is a transhistorical object al-Qaeda and ISIS used as a symbol of adherence to Islam and the Prophet Muhammad’s path. It typically features the words “No God but Allah” with a white circle beneath carrying the words “Muhammad is Allah’s Messenger.” Yet, al-Qaeda often deviated from standard black flag depictions, also featuring white and other emblems used in Afghanistan and elsewhere (see Figure <xref rid="S1.F4" ref-type="fig">4</xref>). 
Without such insights, computational extractions of flags as denotative elements would miss other symbolic variations essential for understanding the nuance of extremist groups’ visual messaging. Another frequent symbol in al-Qaeda and ISIS photographs was militants raising their index fingers. The diverse makeup of our manual coding team, which included Muslim researchers, enabled identification of the gesture as part of Islamic culture, connoting monotheism and the dedication of deeds to the one God. Culturally informed human content expertise was instrumental for properly training and validating computational visual analyses to generate insightful media campaign understandings.</p><fig id="S1.F4"><label>Figure 4:</label><caption><title>Photo from the 17th issue of Al-Masra newspaper showing a militant holding a white flag with the same text that appears on the black banner – Released July 2016</title></caption>
          
          
          
          
        <graphic xlink:href="figures/Figure4.svg" /></fig><p id="S1.SS4.p14">When collaborating with AI, another beneficial area for human coders is adding useful insights about tropes and other visual commonplaces. A prime example in al-Qaeda and ISIS’s media was the excessive reliance on the about to die visual trope, which appeared in 75 percent of their images. Yet, AI missed many instances of the trope. About to die images assume three forms: presumed (i.e., showing implements of death like weapons and destruction), possible (i.e., showing photo subjects potentially dying without a confirmation their death occurred), or certain (i.e., showing photo subjects with accompanying text confirming their deaths) (Zelizer, <xref rid="bib.bibx119" ref-type="bibr">2010</xref>). An unsupervised learning method would likely group all three forms under a military visual frame, hence not distinguishing between the three constructs nor accounting for the groups’ emphasized use of the three disaggregated strategies. Instead, breaking down each form into its own core denotative components could facilitate training and validation. Labeled data for supervised learning could account for objects or elements, such as blood, swords, knives, guns, AK-47S, tanks, armored vehicles, rockets, ammunition, fire, smoke, explosions, and sniper crosshairs. Human coding could then complement the computational analysis by grouping the three about to die clusters, each with its own unique viewer interactions (Zelizer, <xref rid="bib.bibx119" ref-type="bibr">2010</xref>).</p><p id="S1.SS4.p15"><italic>Lesson #4: Ideological Interactions</italic></p><p id="S1.SS4.p16">Increasingly, scholars have recognized the linkages between ideology and discourse. Fabiszak et al. define ideology as “systems of beliefs, shared by a social group, with the power to evaluate and explain the social world” (Fabiszak et al., <xref rid="bib.bibx32" ref-type="bibr">2021</xref>, p. 409). McGee adds that “ideology in practice is a political language, preserved in rhetorical documents, with the capacity to dictate decision and control public belief and behavior” (McGee, <xref rid="bib.bibx73" ref-type="bibr">1980</xref>, p. 5). Van Dijk (<xref rid="bib.bibx27" ref-type="bibr">1998</xref>) explains that ideologies can influence the human mind’s cognitive structures. McGee (<xref rid="bib.bibx73" ref-type="bibr">1980</xref>) posits that a full set of ideological propositions can be summed up in a single term, while Edwards and Winkler (<xref rid="bib.bibx30" ref-type="bibr">1997</xref>) extend such reasoning to a small set of visual images.</p><p id="S1.SS4.p17">Human coding can help AI users distinguish between visual markers of culture and other images not performing ideological functions. One example of how this process could work concerns ideographs. In his study of American culture, McGee (<xref rid="bib.bibx73" ref-type="bibr">1980</xref>) defines a small subset of positive and negative words as ideographs (e.g., freedom, liberty, slavery, and terrorism). They serve as ordinary terms in political discourse, have abstract meanings that allow for collective commitment, warrant the use of power, guide behavior, and have culture-bound meanings. Consideration of the ideograph’s characteristics within social groups can assist AI users in narrowing large corpuses of visual images to specific objects that serve as cultural markers. 
With al-Qaeda and ISIS, for example, one visual ideological code is the groups’ display of the monotheism gesture, whereby Muslims point their index fingers upwards toward heaven to connote their adherence to Allah. Al-Qaeda and ISIS frequently utilize images of the same gesture to appeal to potential Muslim recruits sympathetic to their groups’ causes.</p><p id="S1.SS4.p18">However, a key function of human coders in the AI training process involves understanding the interactions of visual elements of ideographs within an image. Returning to the monotheism gesture example, AI would be able to scrape all images showing humans pointing their index finger upward. Such an approach, absent human coders’ insights, would initially yield many images of Muslims demonstrating their Islamic faith with no affiliation with extremism. AI might also retrieve images of athletes or other individuals raising their index finger to signify success and victory (see Figures <xref rid="S1.F5" ref-type="fig">5</xref> &amp; <xref rid="S1.F6" ref-type="fig">6</xref>). Rather than confound the analysis with too much noise to produce meaningful results, AI trainers could refine the scraping process by asking for the gesture along with the presence of militants, males, direct eye contact in photo subjects, personal distance, black flags, and number of humans, as each of these variables has a documented relationship with the monotheism gesture in extremism photographs (Winkler, <xref rid="bib.bibx112" ref-type="bibr">2022</xref>). By considering element constellations rather than single objects or elements, the dataset will become more accurate in discerning ideological frames.</p>
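<p>As a brief illustration, constellation-based retrieval of this kind can be expressed as a filter over the AI-extracted labels; the pandas sketch below uses hypothetical column names and label values patterned on the codebook:</p>
<preformat>
import pandas as pd

# Hypothetical table of AI-extracted labels, one row per image.
labels = pd.read_csv("ai_labels.csv")

# Single-cue query: every raised index finger, extremist or not.
gesture_only = labels.query("monotheism_gesture == 1")

# Constellation query: the gesture plus co-occurring cues that manual coding
# linked to extremist imagery (extremist militants, direct eye contact,
# intimate/personal distance, ISIS/AQ flag in frame).
constellation = labels.query(
    "monotheism_gesture == 1 and military_role == 1 and eye_contact == 1 "
    "and viewer_distance == 1 and flags == 1"
)
print(len(gesture_only), "gesture images;", len(constellation), "in constellation")
</preformat><fig id="S1.F5"><label>Figure 5:</label><caption><title>Photo from the 1st issue of Rumiyah magazine showing militants before an attack in Iraq, one of whom (on the right) signaling the monotheism gesture – Released September 2016</title></caption>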
          
          
          
          
<fig id="S1.F5"><label>Figure 5:</label><caption><title>Photo from the 1st issue of Rumiyah magazine showing militants before an attack in Iraq, one of whom (on the right) signals the monotheism gesture – Released September 2016</title></caption>
        <graphic xlink:href="figures/Figure5.svg" /></fig><fig id="S1.F6"><label>Figure 6:</label><caption><title>Photo showing two indoor soccer players from the Moroccan national team celebrating by signaling the monotheism gesture – Released by Equipe du Maroc/Facebook September 2024</title></caption>
          
          
          
          
        <graphic xlink:href="figures/Figure6.svg" /></fig><p id="S1.SS4.p19"><italic>Lesson #5: Image-Context Interactions</italic></p><p id="S1.SS4.p20">Context is a multifaceted resource involving myriad forms that can influence interpretations of texts, whether discursive or nondiscursive (Linell, <xref rid="bib.bibx63" ref-type="bibr">1998</xref>). It both shapes and is shaped by its textual interactions. As McGee notes, “Failing to account for ‘context,’ or reducing ‘context’ to one or two of its parts, means quite simply that one is no longer dealing with discourse as it appears in the world” (McGee, <xref rid="bib.bibx74" ref-type="bibr">1990</xref>, p. 283).</p><p id="S1.SS4.p21">The complicated interactions between images and contexts in relation to al-Qaeda and ISIS emphasized the need for human coders to supplement AI for meaningful, efficient message processing. To begin, human coding aided in the identification of image codes that corresponded to changes in context factors over time. As Appendix <xref rid="A3" ref-type="sec">C</xref>, Table <xref rid="A3.T1" ref-type="table">C1</xref> shows, 18 of 30 variables in our manual codebook bore a significant relationship to context variables (e.g., troop withdrawal announcements, online account suspensions, attack lethality, etc.). Those relationships suggest a productive, efficient training regimen for AI, as the context factors reveal how the groups’ responses shifted over time. The remaining coding categories, while potentially useful for assessing image meaning or relationships with other images, did not significantly change in frequency over time, making them less of an AI priority.</p><p id="S1.SS4.p22">Human coding can also help verify the appropriate AI scope for assessing the impact of context variables on strategic image use. Table C1 reveals that the relationships with image strategies vary according to the context factor under consideration. As a result, the outputs of human coding point to context variables that need verification checks before concluding prematurely that any single context variable is responsible for shifts in image characteristics. For example, since censorship of militant group online accounts, announcements of anticipated troop withdrawals, and the relative rise in standing of competing ideological groups all relate to significant changes in the use of photo subject distance, users of AI should consider whether these context factors overlap during the timespan under evaluation before drawing conclusions about the influence of any single context element (see the sketch following this discussion).</p><p id="S1.SS4.p23">Human coding can also yield insights regarding appropriate AI disaggregation of context elements in relation to image characteristics over time. For example, our human-coding assessment of leader loss revealed that deaths of ISIS leaders at different levels and of different types corresponded to different changes in the group’s image characteristics. Political and military leader deaths corresponded to different image changes, and deaths of leaders at different ranks within the media hierarchy corresponded to different shifts in visual strategies. Thus, fine-tuned human coding analysis can aid in the development of a more robust, efficient AI system for analyzing the image-context strategies of groups like al-Qaeda and ISIS.</p>
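<p>One way to operationalize this overlap check is sketched below, assuming a weekly panel of boolean context codings; the column names are illustrative placeholders rather than the study’s actual context variables.</p><preformat>
# Sketch: flag weeks in which several context factors overlap, so that
# a shift in an image variable is not attributed to one factor alone.
# Assumes a weekly panel with boolean context columns; the names are
# illustrative placeholders, not the study's actual context codings.
import pandas as pd

CONTEXT = [
    "account_suspension", "withdrawal_announcement", "rival_group_gain",
]

def confounded_weeks(panel: pd.DataFrame) -> pd.DataFrame:
    """Return the weeks in which two or more context factors co-occur."""
    active = panel[CONTEXT].sum(axis=1)
    return panel[active >= 2]

# Changes in, e.g., photo-subject distance during the returned weeks
# warrant per-factor checks before any causal interpretation.
</preformat>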
        
        
        
        
</sec><sec id="S1.SS5">
      <title>Conclusion</title><p id="S1.SS5.p1">This analysis demonstrates the advantages of humans and AI functioning together to understand the visual framing of extremist group messaging. Human coding yielded benefits for the retrieval, analysis, and validation of results related to denotative, semiotic, connotative, and ideological framing. It also aided in understanding how AI-human interactions can maximize text-context relationships. Yet, validation checks between the human and AI coding revealed that percent agreement levels varied considerably across coding categories, suggesting the need for a more robust AI training process, particularly for subjective variables, such as facial expressions and perceived distance, and for identifying visual constellations linked to connotative and ideological framing.</p><p id="S1.SS5.p2">Examining the extremist visual context provided an ideal opportunity to test and compare manual and computational coding for MENA-based violent groups, but the findings do not necessarily apply to other types of violent or protest visuals. The generalizability of the findings derived from al-Qaeda and ISIS photographic campaigns should be tested in relation to other forms of political violence (e.g., electoral protests, climate activism, and public vigils). Future studies should determine whether the reliability of AI visual framing analysis transfers to these other settings.</p><p id="S1.SS5.p3">Additionally, variables with inherently nuanced or subjective definitions, such as stance or eye contact, pose significant challenges for consistent annotation. These complexities are reflected in the low recall and F1-scores observed in these categories, as the AI model struggles to align with human coders’ interpretations. The limitations of computational methods in capturing subtle cultural or contextual cues further exacerbate these discrepancies, particularly for categories like state, religion, and impending death that rely heavily on contextual understandings. Future studies should examine alternative training protocols to maximize the efficiency and reliability of extraction processes.</p><p id="S1.SS5.p4"><bold>Data Availability:</bold> Replication materials for this study are hosted on the Open Science Framework (OSF). Due to the presence of violent and potentially harmful imagery, the image data are archived as a restricted-access component on OSF and may be accessed upon request, subject to review and approval: https://osf.io/r9tjm, project DOI: 10.17605/OSF.IO/R9TJM. All non-sensitive replication materials—including the coding instrument, model prompts, variable definitions, documentation, and analysis scripts—are publicly available on OSF and are also mirrored on GitHub for ease of access and version control: https://github.com/aysedeniz09/Visual-Framing-in-the-AI-Era.</p></sec></sec>
    </body>
  <back>
    <ref-list><title>References</title>
      <ref id="bib.bibx1"><mixed-citation publication-type="journal"><string-name><surname>Abdelrahim</surname>, <given-names>Y.</given-names></string-name> (<year>2019</year>). <article-title>Visual Analysis of ISIS Discourse Strategies and Types in Dabiq and Rumiyah Online Magazines</article-title>. <source><italic>Visual Communication Quarterly</italic></source>, <volume>26</volume>(<issue>2</issue>), <fpage>63</fpage>–<lpage>78</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1080/15551393.2019.1586546">https://doi.org/10.1080/15551393.2019.1586546</ext-link></mixed-citation></ref>
      <ref id="bib.bibx2"><mixed-citation publication-type="journal"><string-name><surname>Anderson</surname>, <given-names>C.</given-names></string-name> (<year>2017</year>). <article-title>Data-first manifesto: Shifting priorities in scholarly communications</article-title>. <source><italic>Information Services and Use</italic></source>, <volume>37</volume>(<issue>3</issue>), <fpage>335</fpage>–<lpage>342</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.3233/ISU-170852">https://doi.org/10.3233/ISU-170852</ext-link></mixed-citation></ref>
      <ref id="bib.bibx3"><mixed-citation publication-type="journal"><string-name><surname>Armstrong</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>Gosling</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Weinman</surname>, <given-names>J.</given-names></string-name>, &amp; <string-name><surname>Marteau</surname>, <given-names>T.</given-names></string-name> (<year>1997</year>). <article-title>The Place of Inter-Rater Reliability in Qualitative Research: An Empirical Study</article-title>. <source><italic>Sociology</italic></source>, <volume>31</volume>(<issue>3</issue>), <fpage>597</fpage>–<lpage>606</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1177/0038038597031003015">https://doi.org/10.1177/0038038597031003015</ext-link></mixed-citation></ref>
      <ref id="bib.bibx4"><mixed-citation publication-type="journal"><string-name><surname>Bakhshi</surname>, <given-names>S.</given-names></string-name>, &amp; <string-name><surname>Gilbert</surname>, <given-names>E.</given-names></string-name> (<year>2015</year>). <article-title>Red, purple and pink: The colors of diffusion on Pinterest</article-title>. <source><italic>PLOS ONE</italic></source>, <volume>10</volume>(<issue>2</issue>), <fpage>117</fpage>–<lpage>148</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1371/journal.pone.0117148">https://doi.org/10.1371/journal.pone.0117148</ext-link></mixed-citation></ref>
      <ref id="bib.bibx5"><mixed-citation publication-type="journal"><string-name><surname>Barr</surname>, <given-names>A.</given-names></string-name>, &amp; <string-name><surname>Herfroy-Mischler</surname>, <given-names>A.</given-names></string-name> (<year>2017</year>). <article-title>ISIL’s Execution Videos: Audience Segmentation and Terrorist Communication in the Digital Age</article-title>. <source><italic>Studies in Conflict and Terrorism</italic></source>,  <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1080/1057610X.2017.1361282">https://doi.org/10.1080/1057610X.2017.1361282</ext-link></mixed-citation></ref>
      <ref id="bib.bibx6"><mixed-citation publication-type="book"><string-name><surname>Barthes</surname>, <given-names>R.</given-names></string-name> (<year>1981</year>). <source><italic>Camera lucida: Reflections on photography</italic></source>. <publisher-name>Hill and Wang</publisher-name>.</mixed-citation></ref>
      <ref id="bib.bibx7"><mixed-citation publication-type="journal"><string-name><surname>Braun</surname>, <given-names>V.</given-names></string-name>, &amp; <string-name><surname>Clarke</surname>, <given-names>V.</given-names></string-name> (<year>2006</year>). <article-title>Using thematic analysis in psychology</article-title>. <source><italic>Qualitative Research in Psychology</italic></source>, <volume>3</volume>(<issue>2</issue>), <fpage>77</fpage>–<lpage>101</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1191/1478088706qp063oa">https://doi.org/10.1191/1478088706qp063oa</ext-link></mixed-citation></ref>
      <ref id="bib.bibx8"><mixed-citation publication-type="book"><string-name><surname>Bridle</surname>, <given-names>J.</given-names></string-name> (<year>2023</year>). <source><italic>New dark age: Technology and the end of the future</italic></source> (<edition>Updated</edition>). <publisher-name>Verso</publisher-name>.</mixed-citation></ref>
      <ref id="bib.bibx9"><mixed-citation publication-type="book"><string-name><surname>Broz</surname>, <given-names>M.</given-names></string-name> (<year>2023</year>). <source><italic>How many pictures are there (2024): Statistics, trends, and forecasts</italic></source>.  <ext-link ext-link-type="uri" xlink:href="https://photutorial.com/how-many-photos/">https://photutorial.com/how-many-photos/</ext-link></mixed-citation></ref>
      <ref id="bib.bibx10"><mixed-citation publication-type="journal"><string-name><surname>Burscher</surname>, <given-names>B.</given-names></string-name>, <string-name><surname>Odijk</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>Vliegenthart</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Rijke</surname>, <given-names>M.</given-names></string-name>, &amp; <string-name><surname>Vreese</surname>, <given-names>C.</given-names></string-name> (<year>2014</year>). <article-title>Teaching the Computer to Code Frames in News: Comparing Two Supervised Machine Learning Approaches to Frame Analysis</article-title>. <source><italic>Communication Methods and Measures</italic></source>, <volume>8</volume>(<issue>3</issue>), <fpage>190</fpage>–<lpage>206</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1080/19312458.2014.937527">https://doi.org/10.1080/19312458.2014.937527</ext-link></mixed-citation></ref>
      <ref id="bib.bibx11"><mixed-citation publication-type="journal"><string-name><surname>Cacciatore</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Scheufele</surname>, <given-names>D.</given-names></string-name>, &amp; <string-name><surname>Iyengar</surname>, <given-names>S.</given-names></string-name> (<year>2016</year>). <article-title>The End of Framing as we Know it ... and the Future of Media Effects</article-title>. <source><italic>Mass Communication and Society</italic></source>, <volume>19</volume>(<issue>1</issue>), <fpage>7</fpage>–<lpage>23</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1080/15205436.2015.1068811">https://doi.org/10.1080/15205436.2015.1068811</ext-link></mixed-citation></ref>
      <ref id="bib.bibx12"><mixed-citation publication-type="journal"><string-name><surname>Carlin</surname>, <given-names>M.</given-names></string-name> (<year>2012</year>). <article-title>Guns, gold and corporeal inscriptions</article-title>. <source><italic>Third Text</italic></source>, <volume>26</volume>(<issue>5</issue>), <fpage>503</fpage>–<lpage>514</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1080/09528822.2012.712763">https://doi.org/10.1080/09528822.2012.712763</ext-link></mixed-citation></ref>
      <ref id="bib.bibx13"><mixed-citation publication-type="book"><string-name><surname>Ceci</surname>, <given-names>L.</given-names></string-name> (<year>2024</year>). <source><italic>Hours of video uploaded to YouTube every minute 2007–2022</italic></source>.  <ext-link ext-link-type="uri" xlink:href="https://www.statista.com/statistics/259477/hours-of-video-uploaded-to-youtube-every-minute/">https://www.statista.com/statistics/259477/hours-of-video-uploaded-to-youtube-every-minute/</ext-link></mixed-citation></ref>
      <ref id="bib.bibx14"><mixed-citation publication-type="book"><collab>IntelCenter</collab>. (<year>2005</year>). <source><italic>Evolution of jihadi video</italic></source>.  <ext-link ext-link-type="uri" xlink:href="https://intelcenter.com/EJV-PUB-v1-0.pdf">https://intelcenter.com/EJV-PUB-v1-0.pdf</ext-link></mixed-citation></ref>
      <ref id="bib.bibx15"><mixed-citation publication-type="journal"><string-name><surname>Chan</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Bajjalieh</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Auvil</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Wessler</surname>, <given-names>H.</given-names></string-name>, <string-name><surname>Althaus</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Welbers</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Atteveldt</surname>, <given-names>W.</given-names></string-name>, &amp; <string-name><surname>Jungblut</surname>, <given-names>M.</given-names></string-name> (<year>2021</year>). <article-title>Four best practices for measuring news sentiment using ‘off-the-shelf’ dictionaries: A large-scale p-hacking experiment</article-title>. <source><italic>Computational Communication Research</italic></source>, <volume>3</volume>(<issue>1</issue>), <fpage>1</fpage>–<lpage>27</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.5117/CCR2021.1.001.CHAN">https://doi.org/10.5117/CCR2021.1.001.CHAN</ext-link></mixed-citation></ref>
      <ref id="bib.bibx16"><mixed-citation publication-type="journal"><string-name><surname>Chen</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Duan</surname>, <given-names>Z.</given-names></string-name>, &amp; <string-name><surname>Kim</surname>, <given-names>S.</given-names></string-name> (<year>2024</year>). <article-title>Uncovering gender stereotypes in controversial science discourse: Evidence from computational text and visual analyses across digital platforms</article-title>. <source><italic>Journal of Computer-Mediated Communication</italic></source>, <volume>29</volume>(<issue>1</issue>), <fpage>052</fpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1093/jcmc/zmad052">https://doi.org/10.1093/jcmc/zmad052</ext-link></mixed-citation></ref>
      <ref id="bib.bibx17"><mixed-citation publication-type="journal"><string-name><surname>Chen</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Kim</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Gao</surname>, <given-names>Q.</given-names></string-name>, &amp; <string-name><surname>Raschka</surname>, <given-names>S.</given-names></string-name> (<year>2022</year>). <article-title>Visual framing of science conspiracy videos</article-title>. <source><italic>Computational Communication Research</italic></source>, <volume>4</volume>(<issue>1</issue>), <fpage>98</fpage>–<lpage>134</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.5117/ccr2022.1.003.chen">https://doi.org/10.5117/ccr2022.1.003.chen</ext-link></mixed-citation></ref>
      <ref id="bib.bibx18"><mixed-citation publication-type="journal"><string-name><surname>Chen</surname>, <given-names>N.</given-names></string-name>, <string-name><surname>Drouhard</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Kocielnik</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Suh</surname>, <given-names>J.</given-names></string-name>, &amp; <string-name><surname>Aragon</surname>, <given-names>C.</given-names></string-name> (<year>2018</year>). <article-title>Using Machine Learning to Support Qualitative Coding in Social Science: Shifting the Focus to Ambiguity</article-title>. <source><italic>ACM Trans. Interact. Intell. Syst</italic></source>, <volume>8</volume>(<issue>2</issue>), <fpage>9:1</fpage>–<lpage>9:20</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1145/3185515">https://doi.org/10.1145/3185515</ext-link></mixed-citation></ref>
      <ref id="bib.bibx19"><mixed-citation publication-type="journal"><string-name><surname>Chen</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Zhai</surname>, <given-names>Y.</given-names></string-name>, &amp; <string-name><surname>Sun</surname>, <given-names>S.</given-names></string-name> (<year>2024</year>). <article-title>The gendered lens of AI: examining news imagery across digital spaces</article-title>. <source><italic>Journal of Computer-Mediated Communication</italic></source>, <volume>29</volume>(<issue>1</issue>).</mixed-citation></ref>
      <ref id="bib.bibx20"><mixed-citation publication-type="journal"><string-name><surname>Chouliaraki</surname>, <given-names>L.</given-names></string-name>, &amp; <string-name><surname>Kissas</surname>, <given-names>A.</given-names></string-name> (<year>2018</year>). <article-title>The communication of horrorism: a typology of ISIS online death videos</article-title>. <source><italic>Critical Studies in Media Communication</italic></source>, <volume>35</volume>(<issue>1</issue>), <fpage>24</fpage>–<lpage>39</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1080/15295036.2017.1393096">https://doi.org/10.1080/15295036.2017.1393096</ext-link></mixed-citation></ref>
      <ref id="bib.bibx21"><mixed-citation publication-type="journal"><string-name><surname>Ciovacco</surname>, <given-names>C.</given-names></string-name> (<year>2009</year>). <article-title>The contours of Al Qaeda’s media strategy</article-title>. <source><italic>Studies in Conflict and Terrorism</italic></source>, <volume>32</volume>(<issue>10</issue>), <fpage>853</fpage>–<lpage>875</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1080/10576100903182377">https://doi.org/10.1080/10576100903182377</ext-link></mixed-citation></ref>
      <ref id="bib.bibx22"><mixed-citation publication-type="book"><string-name><surname>Coleman</surname>, <given-names>A.</given-names></string-name> (<year>2006</year>). <source><italic>The islamic imagery project. Visual motifs in jihadi the islamic imagery</italic></source>.  <ext-link ext-link-type="uri" xlink:href="https://www.ctc.usma.edu/v2/wp-content/uploads/2010/06/Islamic-Imagery-Project.pdf">https://www.ctc.usma.edu/v2/wp-content/uploads/2010/06/Islamic-Imagery-Project.pdf</ext-link></mixed-citation></ref>
      <ref id="bib.bibx23"><mixed-citation publication-type="book"><string-name><surname>Coleman</surname>, <given-names>R.</given-names></string-name> (<year>2010</year>). <chapter-title>Framing the Pictures in Our Heads: Exploring the Framing and Agenda-Setting Effects of Visual Images</chapter-title>. In <source><italic>Doing News Framing Analysis</italic></source> (pp. <fpage>233</fpage>–<lpage>261</lpage>). </mixed-citation></ref>
      <ref id="bib.bibx24"><mixed-citation publication-type="book"><string-name><surname>Damanhoury</surname>, <given-names>K.</given-names></string-name> (<year>2022</year>). <source><italic>Photographic Warfare: ISIS, Egypt, and the Online Battle for Sinai</italic></source>. <publisher-name>University of Georgia Press</publisher-name>.</mixed-citation></ref>
      <ref id="bib.bibx25"><mixed-citation publication-type="journal"><string-name><surname>Damanhoury</surname>, <given-names>K.</given-names></string-name>, &amp; <string-name><surname>Winkler</surname>, <given-names>C.</given-names></string-name> (<year>2018</year>). <article-title>Picturing Law and Order: A Visual Framing Analysis of ISIS’s Dabiq Magazine</article-title>. <source><italic>Arab Media &amp; Society</italic></source>, <volume>Winter/Spring</volume>(<issue>25</issue>), <fpage>1</fpage>–<lpage>25</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.70090/kdcw18vf">https://doi.org/10.70090/kdcw18vf</ext-link></mixed-citation></ref>
      <ref id="bib.bibx26"><mixed-citation publication-type="journal"><string-name><surname>Dietrich</surname>, <given-names>B.</given-names></string-name>, &amp; <string-name><surname>Ko</surname>, <given-names>H.</given-names></string-name> (<year>2022</year>). <article-title>Finding fauci</article-title>. <source><italic>Computational Communication Research</italic></source>, <volume>4</volume>(<issue>1</issue>), <fpage>135</fpage>–<lpage>172</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.5117/ccr2022.1.004.diet">https://doi.org/10.5117/ccr2022.1.004.diet</ext-link></mixed-citation></ref>
      <ref id="bib.bibx27"><mixed-citation publication-type="book"><string-name><surname>Van Dijk</surname>, <given-names>T.</given-names></string-name> (<year>1998</year>). <source><italic>Ideology: A multidisciplinary approach</italic></source>. <publisher-name>Sage Publications</publisher-name>.</mixed-citation></ref>
      <ref id="bib.bibx28"><mixed-citation publication-type="journal"><string-name><surname>Dondero</surname>, <given-names>M.</given-names></string-name> (<year>2019</year>). <article-title>Visual Semiotics and Automatic Analysis of Images From the Cultural Analytics Lab: How Can Quantitative and Qualitative Analysis Be Combined?</article-title>. <source><italic>Semiotica</italic></source>, <volume>2019</volume>(<issue>230</issue>), <fpage>121</fpage>–<lpage>142</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1515/sem-2018-0104">https://doi.org/10.1515/sem-2018-0104</ext-link></mixed-citation></ref>
      <ref id="bib.bibx29"><mixed-citation publication-type="book"><string-name><surname>Duarte</surname>, <given-names>F.</given-names></string-name> (<year>2024</year>). <source><italic>Amount of data created daily</italic></source>.  <ext-link ext-link-type="uri" xlink:href="https://explodingtopics.com/blog/data-generated-per-day">https://explodingtopics.com/blog/data-generated-per-day</ext-link></mixed-citation></ref>
      <ref id="bib.bibx30"><mixed-citation publication-type="journal"><string-name><surname>Edwards</surname>, <given-names>J.</given-names></string-name>, &amp; <string-name><surname>Winkler</surname>, <given-names>C.</given-names></string-name> (<year>1997</year>). <article-title>Representative form and the visual ideograph: The Iwo Jima image in editorial cartoons</article-title>. <source><italic>Quarterly Journal of Speech</italic></source>, <volume>83</volume>(<issue>3</issue>), <fpage>289</fpage>–<lpage>310</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1080/00335639709384187">https://doi.org/10.1080/00335639709384187</ext-link></mixed-citation></ref>
      <ref id="bib.bibx31"><mixed-citation publication-type="journal"><string-name><surname>El Karhili</surname>, <given-names>N.</given-names></string-name>, <string-name><surname>Hendry</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Kackowski</surname>, <given-names>W.</given-names></string-name>, <string-name><surname>El Damanhoury</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Dicker</surname>, <given-names>A.</given-names></string-name>, &amp; <string-name><surname>Winkler</surname>, <given-names>C.</given-names></string-name> (<year>2021</year>). <article-title>Islamic/State: Daesh’s Visual Negotiation of Institutional Positioning</article-title>. <source><italic>Journal of Media and Religion</italic></source>, <volume>20</volume>(<issue>2</issue>), <fpage>79</fpage>–<lpage>104</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1080/15348423.2021.1930813">https://doi.org/10.1080/15348423.2021.1930813</ext-link></mixed-citation></ref>
      <ref id="bib.bibx32"><mixed-citation publication-type="journal"><string-name><surname>Fabiszak</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Buchstaller</surname>, <given-names>I.</given-names></string-name>, <string-name><surname>Brzezińska</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Alvanides</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Griese</surname>, <given-names>F.</given-names></string-name>, &amp; <string-name><surname>Schneider</surname>, <given-names>C.</given-names></string-name> (<year>2021</year>). <article-title>Ideology in the linguistic landscape: Towards a quantitative approach</article-title>. <source><italic>Discourse &amp; Society</italic></source>, <volume>32</volume>(<issue>4</issue>), <fpage>405</fpage>–<lpage>425</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1177/0957926521992149">https://doi.org/10.1177/0957926521992149</ext-link></mixed-citation></ref>
      <ref id="bib.bibx33"><mixed-citation publication-type="journal"><string-name><surname>Farwell</surname>, <given-names>J.</given-names></string-name> (<year>2010</year>). <article-title>Jihadi Video in the ‘War of Ideas</article-title>. <source><italic>Survival</italic></source>, <volume>52</volume>(<issue>6</issue>), <fpage>127</fpage>–<lpage>150</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1080/00396338.2010.540787">https://doi.org/10.1080/00396338.2010.540787</ext-link></mixed-citation></ref>
      <ref id="bib.bibx34"><mixed-citation publication-type="journal"><string-name><surname>Feng</surname>, <given-names>D.</given-names></string-name>, &amp; <string-name><surname>O’Halloran</surname>, <given-names>K.</given-names></string-name> (<year>2013</year>). <article-title>The visual representation of metaphor: A social semiotic approach</article-title>. <source><italic>Review of Cognitive Linguistics</italic></source>, <volume>11</volume>(<issue>2</issue>), <fpage>320</fpage>–<lpage>335</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1075/rcl.11.2.07">https://doi.org/10.1075/rcl.11.2.07</ext-link></mixed-citation></ref>
      <ref id="bib.bibx35"><mixed-citation publication-type="journal"><string-name><surname>Feng</surname>, <given-names>W.</given-names></string-name> (<year>2017</year>). <article-title>Metonymy and visual representation: Towards a social semiotic framework of visual metonymy</article-title>. <source><italic>Visual Communication</italic></source>, <volume>16</volume>(<issue>4</issue>), <fpage>441</fpage>–<lpage>466</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1177/1470357217717142">https://doi.org/10.1177/1470357217717142</ext-link></mixed-citation></ref>
      <ref id="bib.bibx36"><mixed-citation publication-type="journal"><string-name><surname>Forgas</surname>, <given-names>J.</given-names></string-name>, &amp; <string-name><surname>East</surname>, <given-names>R.</given-names></string-name> (<year>2008</year>). <article-title>How real is that smile? Mood effects on accepting or rejecting the veracity of emotional facial expressions</article-title>. <source><italic>Journal of Nonverbal Behavior</italic></source>, <volume>32</volume>(<issue>3</issue>), <fpage>157</fpage>–<lpage>170</lpage>.</mixed-citation></ref>
      <ref id="bib.bibx37"><mixed-citation publication-type="journal"><string-name><surname>Gamieldien</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Case</surname>, <given-names>J.</given-names></string-name>, &amp; <string-name><surname>Katz</surname>, <given-names>A.</given-names></string-name> (<year>2023</year>). <article-title>Advancing Qualitative Analysis: An Exploration of the Potential of Generative AI and NLP in Thematic Coding</article-title>. <source><italic>SSRN Scholarly Paper</italic></source>.  <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.2139/ssrn.4487768">https://doi.org/10.2139/ssrn.4487768</ext-link></mixed-citation></ref>
      <ref id="bib.bibx38"><mixed-citation publication-type="journal"><string-name><surname>Geise</surname>, <given-names>S.</given-names></string-name>, &amp; <string-name><surname>Baden</surname>, <given-names>C.</given-names></string-name> (<year>2015</year>). <article-title>Putting the Image Back Into the Frame: Modeling the Linkage Between Visual Communication and Frame-Processing Theory</article-title>. <source><italic>Communication Theory</italic></source>, <volume>25</volume>(<issue>1</issue>), <fpage>46</fpage>–<lpage>69</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1111/comt.12048">https://doi.org/10.1111/comt.12048</ext-link></mixed-citation></ref>
      <ref id="bib.bibx39"><mixed-citation publication-type="journal"><string-name><surname>Glausch</surname>, <given-names>M.</given-names></string-name> (<year>2020</year>). <article-title>Infographics and their role in the IS propaganda machine</article-title>. <source><italic>Contemporary Voices: St Andrews Journal of International Relations</italic></source>, <volume>1</volume>(<issue>3</issue>), <fpage>32</fpage>–<lpage>50</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.15664/jtr.1492">https://doi.org/10.15664/jtr.1492</ext-link></mixed-citation></ref>
      <ref id="bib.bibx40"><mixed-citation publication-type="book"><string-name><surname>Goffman</surname>, <given-names>E.</given-names></string-name> (<year>1974</year>). <source><italic>Frame analysis: An essay on the organization of experience</italic></source>. <publisher-name>Northeastern University Press</publisher-name>.</mixed-citation></ref>
      <ref id="bib.bibx41"><mixed-citation publication-type="book"><string-name><surname>Goffman</surname>, <given-names>E.</given-names></string-name> (<year>1979</year>). <source><italic>Gender and advertisements</italic></source>. <publisher-name>Macmillan</publisher-name>.</mixed-citation></ref>
      <ref id="bib.bibx42"><mixed-citation publication-type="book"><string-name><surname>Green</surname>, <given-names>L.</given-names></string-name> (<year>2015</year>). <source><italic>Advertising war: Pictorial publicity, 1914-1918</italic></source>. <publisher-name>Doctoral Dissertation, Manchester Metropolitan University</publisher-name>.</mixed-citation></ref>
      <ref id="bib.bibx43"><mixed-citation publication-type="journal"><string-name><surname>Gregg</surname>, <given-names>H.</given-names></string-name> (<year>2023</year>). <article-title>The Islamic State in Africa: The Emergence, Evolution, and Future of the Next Jihadist Battlefront</article-title>. <source><italic>Parameters</italic></source>, <volume>53</volume>(<issue>3</issue>), <fpage>159</fpage>–<lpage>161</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1080/10220461.2023.2201586">https://doi.org/10.1080/10220461.2023.2201586</ext-link></mixed-citation></ref>
      <ref id="bib.bibx44"><mixed-citation publication-type="journal"><string-name><surname>Ha</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Park</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Kim</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Joo</surname>, <given-names>J.</given-names></string-name>, &amp; <string-name><surname>Cha</surname>, <given-names>M.</given-names></string-name> (<year>2020</year>). <article-title>Automatically Detecting Image–Text Mismatch on Instagram with Deep Learning</article-title>. <source><italic>Journal of Advertising</italic></source>, <volume>50</volume>(<issue>1</issue>), <fpage>52</fpage>–<lpage>62</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1080/00913367.2020.1843091">https://doi.org/10.1080/00913367.2020.1843091</ext-link></mixed-citation></ref>
      <ref id="bib.bibx45"><mixed-citation publication-type="book"><string-name><surname>Hall</surname>, <given-names>E.</given-names></string-name> (<year>1966</year>). <source><italic>The hidden dimension</italic></source>. <publisher-name>Doubleday</publisher-name>.</mixed-citation></ref>
      <ref id="bib.bibx46"><mixed-citation publication-type="journal"><string-name><surname>Hanusch</surname>, <given-names>F.</given-names></string-name> (<year>2013</year>). <article-title>Sensationalizing death? Graphic disaster images in the tabloid and broadsheet press</article-title>. <source><italic>European Journal of Communication</italic></source>, <volume>28</volume>(<issue>5</issue>), <fpage>497</fpage>–<lpage>513</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1177/0267323113491349">https://doi.org/10.1177/0267323113491349</ext-link></mixed-citation></ref>
      <ref id="bib.bibx47"><mixed-citation publication-type="book"><string-name><surname>Hariman</surname>, <given-names>R.</given-names></string-name>, &amp; <string-name><surname>Lucaites</surname>, <given-names>J.</given-names></string-name> (<year>2007</year>). <source><italic>No caption needed: Iconic photographs, public culture, and liberal democracy</italic></source>. <publisher-name>University of Chicago Press</publisher-name>.</mixed-citation></ref>
      <ref id="bib.bibx48"><mixed-citation publication-type="journal"><string-name><surname>Heck</surname>, <given-names>A.</given-names></string-name> (<year>2017</year>). <article-title>Images, visions and narrative identity formation of ISIS</article-title>. <source><italic>Global Discourse</italic></source>, <volume>7</volume>(<issue>2-3</issue>), <fpage>244</fpage>–<lpage>259</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1080/23269995.2017.1342490">https://doi.org/10.1080/23269995.2017.1342490</ext-link></mixed-citation></ref>
      <ref id="bib.bibx49"><mixed-citation publication-type="journal"><string-name><surname>Höijer</surname>, <given-names>B.</given-names></string-name> (<year>2004</year>). <article-title>The discourse of global compassion: The audience and media reporting of global suffering</article-title>. <source><italic>Media, Culture &amp; Society</italic></source>, <volume>26</volume>(<issue>4</issue>), <fpage>513</fpage>–<lpage>531</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1177/0163443704044215">https://doi.org/10.1177/0163443704044215</ext-link></mixed-citation></ref>
      <ref id="bib.bibx50"><mixed-citation publication-type="journal"><string-name><surname>Impara</surname>, <given-names>E.</given-names></string-name> (<year>2018</year>). <article-title>A social semiotics analysis of Islamic State’s use of beheadings: Images of power, masculinity, spectacle and propaganda</article-title>. <source><italic>International Journal of Law, Crime and Justice</italic></source>, <volume>53</volume>, <fpage>25</fpage>–<lpage>45</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1016/j.ijlcj.2018.02.002">https://doi.org/10.1016/j.ijlcj.2018.02.002</ext-link></mixed-citation></ref>
      <ref id="bib.bibx51"><mixed-citation publication-type="collab"><collab>JasenkaG</collab>. (<year>2023</year>). <article-title>How many pictures? Photo statistics that will blow you away</article-title>.  <ext-link ext-link-type="uri" xlink:href="https://www.lightstalking.com/photo-statistics/">https://www.lightstalking.com/photo-statistics/</ext-link></mixed-citation></ref>
      <ref id="bib.bibx52"><mixed-citation publication-type="book"><string-name><surname>Jewitt</surname>, <given-names>C.</given-names></string-name>, &amp; <string-name><surname>Oyama</surname>, <given-names>R.</given-names></string-name> (<year>2008</year>). <chapter-title>Visual Meaning: a Social Semiotic Approach</chapter-title>. In <source><italic>Handbook of Visual Analysis</italic></source> (pp. <fpage>134</fpage>–<lpage>156</lpage>). <publisher-name>Sage Publications Ltd</publisher-name>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1017/CBO9781107415324.004">https://doi.org/10.1017/CBO9781107415324.004</ext-link></mixed-citation></ref>
      <ref id="bib.bibx53"><mixed-citation publication-type="journal"><string-name><surname>Joo</surname>, <given-names>J.</given-names></string-name>, &amp; <string-name><surname>Steinert-Threlkeld</surname>, <given-names>Z.</given-names></string-name> (<year>2022</year>). <article-title>Image as data: Automated content analysis for visual presentations of political actors and events</article-title>. <source><italic>Computational Communication Research</italic></source>, <volume>4</volume>(<issue>1</issue>), <fpage>11</fpage>–<lpage>67</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.5117/CCR2022.1.001.JOO">https://doi.org/10.5117/CCR2022.1.001.JOO</ext-link></mixed-citation></ref>
      <ref id="bib.bibx54"><mixed-citation publication-type="journal"><string-name><surname>Kaczkowski</surname>, <given-names>W.</given-names></string-name>, <string-name><surname>Winkler</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>El Damanhoury</surname>, <given-names>K.</given-names></string-name>, &amp; <string-name><surname>Luu</surname>, <given-names>Y.</given-names></string-name> (<year>2021</year>). <article-title>Intersections of the Real and the Virtual Caliphates: The Islamic State’s Territory and Media Campaign</article-title>. <source><italic>Journal of Global Security Studies</italic></source>, <volume>6</volume>(<issue>2</issue>),  <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1093/jogss/ogaa020">https://doi.org/10.1093/jogss/ogaa020</ext-link></mixed-citation></ref>
      <ref id="bib.bibx55"><mixed-citation publication-type="journal"><string-name><surname>Knobloch</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Hastall</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Zillmann</surname>, <given-names>D.</given-names></string-name>, &amp; <string-name><surname>Callison</surname>, <given-names>C.</given-names></string-name> (<year>2003</year>). <article-title>Imagery effects on the selective reading of internet newsmagazines</article-title>. <source><italic>Communication Research</italic></source>, <volume>30</volume>(<issue>1</issue>), <fpage>3</fpage>–<lpage>29</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1177/0093650202239023">https://doi.org/10.1177/0093650202239023</ext-link></mixed-citation></ref>
      <ref id="bib.bibx56"><mixed-citation publication-type="journal"><string-name><surname>Kraft</surname>, <given-names>R.</given-names></string-name> (<year>1986</year>). <article-title>The influence of camera angle on comprehension and retention of pictorial events</article-title>. <source><italic>Memory &amp; Cognition</italic></source>, <volume>15</volume>(<issue>4</issue>), <fpage>291</fpage>–<lpage>307</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.3758/BF03197032">https://doi.org/10.3758/BF03197032</ext-link></mixed-citation></ref>
      <ref id="bib.bibx57"><mixed-citation publication-type="book"><string-name><surname>Kress</surname>, <given-names>G.</given-names></string-name>, &amp; <string-name><surname>Leeuwen</surname>, <given-names>T.</given-names></string-name> (<year>2006</year>). <source><italic>Reading images: The grammar of visual design</italic></source> (<edition>2nd</edition>). <publisher-name>Routledge</publisher-name>.</mixed-citation></ref>
      <ref id="bib.bibx58"><mixed-citation publication-type="book"><string-name><surname>Kress</surname>, <given-names>G.</given-names></string-name>, &amp; <string-name><surname>Leeuwen</surname>, <given-names>T.</given-names></string-name> (<year>1996</year>). <source><italic>Reading Images: The Grammar of Visual Design</italic></source>. <publisher-name>Routledge</publisher-name>.</mixed-citation></ref>
      <ref id="bib.bibx59"><mixed-citation publication-type="book"><string-name><surname>Krippendorff</surname>, <given-names>K.</given-names></string-name> (<year>2018</year>). <source><italic>Content analysis: An introduction to its methodology</italic></source> (<edition>Fourth</edition>). <publisher-name>SAGE</publisher-name>.</mixed-citation></ref>
      <ref id="bib.bibx60"><mixed-citation publication-type="book"><string-name><surname>Kuznar</surname>, <given-names>L.</given-names></string-name> (<year>2015</year>). <source><italic>Daesh’s image of the state in their own words</italic></source>. </mixed-citation></ref>
      <ref id="bib.bibx61"><mixed-citation publication-type="journal"><string-name><surname>Lakretz</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Dehaene</surname>, <given-names>S.</given-names></string-name>, &amp; <string-name><surname>King</surname>, <given-names>JR.</given-names></string-name> (<year>2020</year>). <article-title>What Limits Our Capacity to Process Nested Long-Range Dependencies in Sentence Comprehension?</article-title>. <source><italic>Entropy</italic></source>, <volume>22</volume>(<issue>4</issue>), <fpage>446</fpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.3390/e22040446">https://doi.org/10.3390/e22040446</ext-link></mixed-citation></ref>
      <ref id="bib.bibx62"><mixed-citation publication-type="book"><string-name><surname>Leeuwen</surname>, <given-names>T.</given-names></string-name> (<year>2001</year>). <chapter-title>Semiotics and iconography</chapter-title>. In <source><italic>The handbook of visual analysis</italic></source> (pp. <fpage>92</fpage>–<lpage>118</lpage>). </mixed-citation></ref>
      <ref id="bib.bibx63"><mixed-citation publication-type="book"><string-name><surname>Linell</surname>, <given-names>P.</given-names></string-name> (<year>1998</year>). <source><italic>Approaching Dialogue: Talk, interaction and contexts in dialogical perspectives</italic></source>. <publisher-name>John Benjamins Publishing Company</publisher-name>.</mixed-citation></ref>
      <ref id="bib.bibx64"><mixed-citation publication-type="journal"><string-name><surname>Lokmanoglu</surname>, <given-names>A.</given-names></string-name> (<year>2020</year>). <article-title>Coin as Imagined Sovereignty: A Rhetorical Analysis of Coins as a Transhistorical Artifact and an Ideograph in Islamic State’s Communication</article-title>. <source><italic>Studies in Conflict &amp; Terrorism</italic></source>, <volume>44</volume>(<issue>1</issue>), <fpage>52</fpage>–<lpage>73</lpage>.</mixed-citation></ref>
      <ref id="bib.bibx65"><mixed-citation publication-type="journal"><string-name><surname>Lokmanoglu</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Winkler</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Mcminimy</surname>, <given-names>K.</given-names></string-name>, &amp; <string-name><surname>Almahmoud</surname>, <given-names>M.</given-names></string-name> (<year>2022</year>). <article-title>Troop Withdrawal Announcements, ISIS Media: Visualizing Community and Resilience</article-title>. <source><italic>International Journal of Communication</italic></source>, <volume>16</volume>, <fpage>215</fpage>–<lpage>246</lpage>.</mixed-citation></ref>
      <ref id="bib.bibx66"><mixed-citation publication-type="book"><string-name><surname>Lokmanoglu</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Allaham</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Abhari</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Mortenson</surname>, <given-names>C.</given-names></string-name>, &amp; <string-name><surname>Villa Turek</surname>, <given-names>E.</given-names></string-name> (<year>2023</year>). <source><italic>A Picture is Worth a Thousand (S)words: Classification and Diffusion of Memes on a Partisan Media Platform</italic></source>.  <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.18742/pub01">https://doi.org/10.18742/pub01</ext-link></mixed-citation></ref>
      <ref id="bib.bibx67"><mixed-citation publication-type="journal"><string-name><surname>Lokmanoglu</surname>, <given-names>A.</given-names></string-name>, &amp; <string-name><surname>Walter</surname>, <given-names>D.</given-names></string-name> (<year>2025</year>). <article-title>Topic modeling of video and image data: A visual semantic unsupervised approach</article-title>. <source><italic>Communication Methods and Measures</italic></source>,  <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1080/19312458.2025.2549707">https://doi.org/10.1080/19312458.2025.2549707</ext-link></mixed-citation></ref>
      <ref id="bib.bibx68"><mixed-citation publication-type="journal"><string-name><surname>Mackieson</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Shlonsky</surname>, <given-names>A.</given-names></string-name>, &amp; <string-name><surname>Connolly</surname>, <given-names>M.</given-names></string-name> (<year>2019</year>). <article-title>Increasing rigor and reducing bias in qualitative research: A document analysis of parliamentary debates using applied thematic analysis</article-title>. <source><italic>Qualitative Social Work</italic></source>, <volume>18</volume>(<issue>6</issue>), <fpage>965</fpage>–<lpage>980</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1177/1473325018786996">https://doi.org/10.1177/1473325018786996</ext-link></mixed-citation></ref>
      <ref id="bib.bibx69"><mixed-citation publication-type="journal"><string-name><surname>Marathe</surname>, <given-names>M.</given-names></string-name>, &amp; <string-name><surname>Toyama</surname>, <given-names>K.</given-names></string-name> (<year>2018</year>). <article-title>Semi-Automated Coding for Qualitative Research: A User-Centered Inquiry and Initial Prototypes</article-title>. <source><italic>Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems</italic></source>, <fpage>1</fpage>–<lpage>12</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1145/3173574.3173922">https://doi.org/10.1145/3173574.3173922</ext-link></mixed-citation></ref>
      <ref id="bib.bibx70"><mixed-citation publication-type="journal"><string-name><surname>Mattoni</surname>, <given-names>A.</given-names></string-name>, &amp; <string-name><surname>Teune</surname>, <given-names>S.</given-names></string-name> (<year>2014</year>). <article-title>Visions of protest. A media‐historic perspective on images in social movements</article-title>. <source><italic>Sociology Compass</italic></source>, <volume>8</volume>(<issue>6</issue>), <fpage>876</fpage>–<lpage>887</lpage>.</mixed-citation></ref>
      <ref id="bib.bibx71"><mixed-citation publication-type="journal"><string-name><surname>McCain</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Chilberg</surname>, <given-names>J.</given-names></string-name>, &amp; <string-name><surname>Wakshlag</surname>, <given-names>J.</given-names></string-name> (<year>1977</year>). <article-title>The effect of camera angle on source credibility and attraction</article-title>. <source><italic>Journal of Broadcasting</italic></source>, <volume>21</volume>(<issue>1</issue>), <fpage>35</fpage>–<lpage>46</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1080/08838157709363815">https://doi.org/10.1080/08838157709363815</ext-link></mixed-citation></ref>
      <ref id="bib.bibx72"><mixed-citation publication-type="journal"><string-name><surname>McClancy</surname>, <given-names>K.</given-names></string-name> (<year>2013</year>). <article-title>The iconography of violence: Television, Vietnam, and the soldier hero</article-title>. <source><italic>Film &amp; History: An Interdisciplinary Journal</italic></source>, <volume>43</volume>(<issue>2</issue>), <fpage>50</fpage>–<lpage>66</lpage>.</mixed-citation></ref>
      <ref id="bib.bibx73"><mixed-citation publication-type="journal"><string-name><surname>McGee</surname>, <given-names>M.</given-names></string-name> (<year>1980</year>). <article-title>The “ideograph”: A link between rhetoric and ideology</article-title>. <source><italic>Quarterly Journal of Speech</italic></source>, <volume>66</volume>(<issue>1</issue>), <fpage>1</fpage>–<lpage>16</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1080/00335638009383499">https://doi.org/10.1080/00335638009383499</ext-link></mixed-citation></ref>
      <ref id="bib.bibx74"><mixed-citation publication-type="journal"><string-name><surname>McGee</surname>, <given-names>M.</given-names></string-name> (<year>1990</year>). <article-title>Text, Context, and the Fragmentation of Contemporary Culture</article-title>. <source><italic>Western Journal of Speech Communication</italic></source>, <volume>54</volume>(<issue>3</issue>), <fpage>274</fpage>–<lpage>289</lpage>.</mixed-citation></ref>
      <ref id="bib.bibx75"><mixed-citation publication-type="book"><string-name><surname>Mcgil</surname>, <given-names>J.</given-names></string-name> (<year>2024</year>).  <ext-link ext-link-type="uri" xlink:href="https://contentdetector.ai/articles/meme-statistics/">https://contentdetector.ai/articles/meme-statistics/</ext-link></mixed-citation></ref>
      <ref id="bib.bibx76"><mixed-citation publication-type="journal"><string-name><surname>McMinimy</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Winkler</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Lokmanoglu</surname>, <given-names>A.</given-names></string-name>, &amp; <string-name><surname>Almahmoud</surname>, <given-names>M.</given-names></string-name> (<year>2021</year>). <article-title>Censoring Extremism: Influence of Online Restriction on Official Media Products of ISIS</article-title>. <source><italic>Terrorism and Political Violence</italic></source>, <volume>00</volume>(<issue>00</issue>), <fpage>1</fpage>–<lpage>17</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1080/09546553.2021.1988938">https://doi.org/10.1080/09546553.2021.1988938</ext-link></mixed-citation></ref>
      <ref id="bib.bibx77"><mixed-citation publication-type="book"><string-name><surname>Messaris</surname>, <given-names>P.</given-names></string-name>, &amp; <string-name><surname>Abraham</surname>, <given-names>L.</given-names></string-name> (<year>2001</year>). <chapter-title>The role of images in framing news stories</chapter-title>. In <source><italic>Framing public life: Perspectives on media and our understanding of the social world</italic></source> (pp. <fpage>215</fpage>–<lpage>226</lpage>). <publisher-name>Lawrence Erlbaum Associates Publishers</publisher-name>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.4324/9781410605689">https://doi.org/10.4324/9781410605689</ext-link></mixed-citation></ref>
      <ref id="bib.bibx78"><mixed-citation publication-type="book"><string-name><surname>Mitchell</surname>, <given-names>W.</given-names></string-name> (<year>1995</year>). <source><italic>Picture Theory: Essays on Verbal and Visual Representation</italic></source>. <publisher-name>University of Chicago Press</publisher-name>. <ext-link ext-link-type="uri" xlink:href="https://press.uchicago.edu/ucp/books/book/chicago/P/bo3683962.html">https://press.uchicago.edu/ucp/books/book/chicago/P/bo3683962.html</ext-link></mixed-citation></ref>
      <ref id="bib.bibx79"><mixed-citation publication-type="journal"><string-name><surname>Morgan</surname>, <given-names>D.</given-names></string-name> (<year>2023</year>). <article-title>Exploring the Use of Artificial Intelligence for Qualitative Data Analysis: The Case of ChatGPT</article-title>. <source><italic>International Journal of Qualitative Methods</italic></source>, <volume>22</volume>, <fpage>16094069231211248</fpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1177/16094069231211248">https://doi.org/10.1177/16094069231211248</ext-link></mixed-citation></ref>
      <ref id="bib.bibx80"><mixed-citation publication-type="journal"><string-name><surname>Moriarty</surname>, <given-names>S.</given-names></string-name> (<year>2002</year>). <article-title>The symbiotics of semiotics and visual communication</article-title>. <source><italic>Journal of Visual Literacy</italic></source>, <volume>22</volume>(<issue>1</issue>), <fpage>19</fpage>–<lpage>28</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1080/23796529.2002.11674579">https://doi.org/10.1080/23796529.2002.11674579</ext-link></mixed-citation></ref>
      <ref id="bib.bibx81"><mixed-citation publication-type="journal"><string-name><surname>Muise</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>Lu</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Pan</surname>, <given-names>J.</given-names></string-name>, &amp; <string-name><surname>Reeves</surname>, <given-names>B.</given-names></string-name> (<year>2022</year>). <article-title>Selectively localized: Temporal and visual structure of smartphone screen activity across media environments</article-title>. <source><italic>Mobile Media &amp; Communication</italic></source>, <volume>10</volume>(<issue>3</issue>), <fpage>487</fpage>–<lpage>509</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1177/20501579221080333">https://doi.org/10.1177/20501579221080333</ext-link></mixed-citation></ref>
      <ref id="bib.bibx82"><mixed-citation publication-type="collab"><collab>OpenAI</collab>. (<year>2024</year>). <article-title>ChatGPT-4o API</article-title>.  <ext-link ext-link-type="uri" xlink:href="https://platform.openai.com/">https://platform.openai.com/</ext-link></mixed-citation></ref>
      <ref id="bib.bibx83"><mixed-citation publication-type="journal"><string-name><surname>Paglen</surname>, <given-names>T.</given-names></string-name> (<year>2019</year>). <article-title>Invisible Images: Your Pictures Are Looking at You</article-title>. <source><italic>Architectural Design</italic></source>, <volume>89</volume>(<issue>1</issue>), <fpage>22</fpage>–<lpage>27</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1002/ad.2383">https://doi.org/10.1002/ad.2383</ext-link></mixed-citation></ref>
      <ref id="bib.bibx84"><mixed-citation publication-type="book"><string-name><surname>Panofsky</surname>, <given-names>E.</given-names></string-name> (<year>1955</year>). <source><italic>Meaning in the visual arts</italic></source>. <publisher-name>Doubleday Anchor Books</publisher-name>.</mixed-citation></ref>
      <ref id="bib.bibx85"><mixed-citation publication-type="journal"><string-name><surname>Peng</surname>, <given-names>Y.</given-names></string-name> (<year>2018</year>). <article-title>Same candidates, different faces: Uncovering media bias in visual portrayals of presidential candidates with computer vision</article-title>. <source><italic>Journal of Communication</italic></source>, <volume>68</volume>(<issue>5</issue>), <fpage>920</fpage>–<lpage>941</lpage>.</mixed-citation></ref>
      <ref id="bib.bibx86"><mixed-citation publication-type="journal"><string-name><surname>Peng</surname>, <given-names>Y.</given-names></string-name> (<year>2021</year>). <article-title>What Makes Politicians’ Instagram Posts Popular? Analyzing Social Media Strategies of Candidates and Office Holders with Computer Vision</article-title>. <source><italic>The International Journal of Press/Politics</italic></source>, <volume>26</volume>(<issue>1</issue>), <fpage>143</fpage>–<lpage>166</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1177/1940161220964769">https://doi.org/10.1177/1940161220964769</ext-link></mixed-citation></ref>
      <ref id="bib.bibx87"><mixed-citation publication-type="journal"><string-name><surname>Peng</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Lock</surname>, <given-names>I.</given-names></string-name>, &amp; <string-name><surname>Ali Salah</surname>, <given-names>A.</given-names></string-name> (<year>2024</year>). <article-title>Automated visual analysis for the study of social media effects: Opportunities, approaches, and challenges</article-title>. <source><italic>Communication Methods and Measures</italic></source>, <volume>18</volume>(<issue>2</issue>), <fpage>163</fpage>–<lpage>185</lpage>.</mixed-citation></ref>
      <ref id="bib.bibx88"><mixed-citation publication-type="book"><string-name><surname>Peng</surname>, <given-names>Y.</given-names></string-name>, &amp; <string-name><surname>Yingdan</surname>, <given-names>L.</given-names></string-name> (<year>2023</year>). <chapter-title>Computational visual analysis in political communication</chapter-title>. In <source><italic>Research handbook in visual politics</italic></source> (pp. <fpage>41</fpage>–<lpage>53</lpage>). <publisher-name>Edward Elgar</publisher-name>. <ext-link ext-link-type="uri" xlink:href="https://ssrn.com/abstract=4577025">https://ssrn.com/abstract=4577025</ext-link></mixed-citation></ref>
      <ref id="bib.bibx89"><mixed-citation publication-type="book"><string-name><surname>Perlmutter</surname>, <given-names>D.</given-names></string-name> (<year>1998</year>). <source><italic>Photojournalism and foreign policy: Icons of outrage in international crises</italic></source>. <publisher-name>Praeger</publisher-name>.</mixed-citation></ref>
      <ref id="bib.bibx90"><mixed-citation publication-type="journal"><string-name><surname>Rietz</surname>, <given-names>T.</given-names></string-name>, &amp; <string-name><surname>Maedche</surname>, <given-names>A.</given-names></string-name> (<year>2021</year>). <article-title>Cody: An AI-Based System to Semi-Automate Coding for Qualitative Research</article-title>. <source><italic>Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems</italic></source>, <fpage>1</fpage>–<lpage>14</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1145/3411764.3445591">https://doi.org/10.1145/3411764.3445591</ext-link></mixed-citation></ref>
      <ref id="bib.bibx91"><mixed-citation publication-type="journal"><string-name><surname>Rodriguez</surname>, <given-names>L.</given-names></string-name>, &amp; <string-name><surname>Dimitrova</surname>, <given-names>D.</given-names></string-name> (<year>2011</year>). <article-title>The levels of visual framing</article-title>. <source><italic>Journal of Visual Literacy</italic></source>, <volume>30</volume>(<issue>1</issue>), <fpage>48</fpage>–<lpage>65</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1080/23796529.2011.11674684">https://doi.org/10.1080/23796529.2011.11674684</ext-link></mixed-citation></ref>
      <ref id="bib.bibx92"><mixed-citation publication-type="book"><string-name><surname>Saldana</surname>, <given-names>J.</given-names></string-name> (<year>2021</year>). <source><italic>The Coding Manual for Qualitative Researchers</italic></source>. <publisher-name>SAGE Publications Ltd</publisher-name>.</mixed-citation></ref>
      <ref id="bib.bibx93"><mixed-citation publication-type="book"><string-name><surname>Saltman</surname>, <given-names>E.</given-names></string-name>, &amp; <string-name><surname>Smith</surname>, <given-names>M.</given-names></string-name> (<year>2015</year>). <source><italic>Till Martyrdom Do Us Part - Gender and the ISIS</italic></source>.  <ext-link ext-link-type="uri" xlink:href="http://www.strategicdialogue.org/Till_Martyrdom_Do_Us_Part_Gender_and_the_ISIS_Phenomenon.pdf">http://www.strategicdialogue.org/Till_Martyrdom_Do_Us_Part_Gender_and_the_ISIS_Phenomenon.pdf</ext-link></mixed-citation></ref>
      <ref id="bib.bibx94"><mixed-citation publication-type="journal"><string-name><surname>Schwalbe</surname>, <given-names>C.</given-names></string-name> (<year>2006</year>). <article-title>Remembering Our Shared Past: Visually Framing the Iraq War on U.S</article-title>. <source><italic>News Websites. Journal of Computer-Mediated Communication</italic></source>, <volume>12</volume>(<issue>1</issue>), <fpage>264</fpage>–<lpage>289</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1111/j.1083-6101.2006.00325.x">https://doi.org/10.1111/j.1083-6101.2006.00325.x</ext-link></mixed-citation></ref>
      <ref id="bib.bibx95"><mixed-citation publication-type="journal"><string-name><surname>Sezen</surname>, <given-names>D.</given-names></string-name> (<year>2020</year>). <article-title>Without a blink: Machine ways of seeing in contemporary visual culture</article-title>. <source><italic>Interactions: Studies in Communication &amp; Culture</italic></source>, <volume>11</volume>(<issue>1</issue>), <fpage>103</fpage>–<lpage>107</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1386/iscc_00010_7">https://doi.org/10.1386/iscc_00010_7</ext-link></mixed-citation></ref>
      <ref id="bib.bibx96"><mixed-citation publication-type="journal"><string-name><surname>Song</surname>, <given-names>H.</given-names></string-name>, <string-name><surname>Tolochko</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Eberl</surname>, <given-names>JM.</given-names></string-name>, <string-name><surname>Eisele</surname>, <given-names>O.</given-names></string-name>, <string-name><surname>Greussing</surname>, <given-names>E.</given-names></string-name>, <string-name><surname>Heidenreich</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Lind</surname>, <given-names>F.</given-names></string-name>, <string-name><surname>Galyga</surname>, <given-names>S.</given-names></string-name>, &amp; <string-name><surname>Boomgaarden</surname>, <given-names>H.</given-names></string-name> (<year>2020</year>). <article-title>In Validations We Trust? The Impact of Imperfect Human Annotations as a Gold Standard on the Quality of Validation of Automated Content Analysis</article-title>. <source><italic>Political Communication</italic></source>, <volume>37</volume>(<issue>4</issue>), <fpage>550</fpage>–<lpage>572</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1080/10584609.2020.1723752">https://doi.org/10.1080/10584609.2020.1723752</ext-link></mixed-citation></ref>
      <ref id="bib.bibx97"><mixed-citation publication-type="journal"><string-name><surname>Steinert-Threlkeld</surname>, <given-names>Z.</given-names></string-name>, <string-name><surname>Chan</surname>, <given-names>A.</given-names></string-name>, &amp; <string-name><surname>Joo</surname>, <given-names>J.</given-names></string-name> (<year>2022</year>). <article-title>How State and Protester Violence Affect Protest Dynamics</article-title>. <source><italic>The Journal of Politics</italic></source>, <volume>84</volume>(<issue>2</issue>), <fpage>798</fpage>–<lpage>813</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1086/715600">https://doi.org/10.1086/715600</ext-link></mixed-citation></ref>
      <ref id="bib.bibx98"><mixed-citation publication-type="journal"><string-name><surname>Stone</surname>, <given-names>E.</given-names></string-name>, <string-name><surname>Sieck</surname>, <given-names>W.</given-names></string-name>, <string-name><surname>Bull</surname>, <given-names>B.</given-names></string-name>, <string-name><surname>Frank Yates</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Parks</surname>, <given-names>S.</given-names></string-name>, &amp; <string-name><surname>Rush</surname>, <given-names>C.</given-names></string-name> (<year>2003</year>). <article-title>Foreground:background salience: Explaining the effects of graphical displays on risk avoidance</article-title>. <source><italic>Organizational Behavior and Human Decision Processes</italic></source>, <volume>90</volume>(<issue>1</issue>), <fpage>19</fpage>–<lpage>36</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1016/S0749-5978(03)00003-7">https://doi.org/10.1016/S0749-5978(03)00003-7</ext-link></mixed-citation></ref>
      <ref id="bib.bibx99"><mixed-citation publication-type="journal"><string-name><surname>Tang</surname>, <given-names>D.</given-names></string-name>, &amp; <string-name><surname>Schmeichel</surname>, <given-names>B.</given-names></string-name> (<year>2015</year>). <article-title>Look me in the eye: Manipulated eye gaze affects dominance mindsets</article-title>. <source><italic>Journal of Nonverbal Behavior</italic></source>, <volume>39</volume>(<issue>2</issue>), <fpage>181</fpage>–<lpage>194</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1007/s10919-015-0206-8">https://doi.org/10.1007/s10919-015-0206-8</ext-link></mixed-citation></ref>
      <ref id="bib.bibx100"><mixed-citation publication-type="book"><string-name><surname>Torres</surname>, <given-names>M.</given-names></string-name> (<year>2023</year>). <chapter-title>A framework for the unsupervised and semi-supervised analysis of visual frames</chapter-title>. In <source><italic>Political Analysis</italic></source> (pp. <fpage>1</fpage>–<lpage>22</lpage>).  <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1017/pan.2023.32">https://doi.org/10.1017/pan.2023.32</ext-link></mixed-citation></ref>
      <ref id="bib.bibx101"><mixed-citation publication-type="journal"><string-name><surname>Trachtenberg</surname>, <given-names>A.</given-names></string-name> (<year>1985</year>). <article-title>Albums of war: On reading Civil War photographs</article-title>. <source><italic>Representations</italic></source>, <volume>9</volume>, <fpage>1</fpage>–<lpage>32</lpage>.</mixed-citation></ref>
      <ref id="bib.bibx102"><mixed-citation publication-type="book"><string-name><surname>Turner</surname>, <given-names>V.</given-names></string-name> (<year>1974</year>). <source><italic>Dramas, fields, and metaphors: Symbolic action in human society</italic></source>. <publisher-name>Cornell University Press</publisher-name>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.2307/3710621">https://doi.org/10.2307/3710621</ext-link></mixed-citation></ref>
      <ref id="bib.bibx103"><mixed-citation publication-type="book"><string-name><surname>Turner</surname>, <given-names>V.</given-names></string-name> (<year>1979</year>). <source><italic>Process, performance, and pilgrimage: A study in comparative symbology</italic></source>. <publisher-name>Concept Publishing Company</publisher-name>.</mixed-citation></ref>
      <ref id="bib.bibx104"><mixed-citation publication-type="journal"><string-name><surname>Walter</surname>, <given-names>D.</given-names></string-name>, &amp; <string-name><surname>Ophir</surname>, <given-names>Y.</given-names></string-name> (<year>2024</year>). <article-title>Meta-theorizing framing in communication research (1992–2022): Toward academic silos or professionalized specialization?</article-title>. <source><italic>Journal of Communication</italic></source>, <volume>jqad043</volume>,  <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1093/joc/jqad043">https://doi.org/10.1093/joc/jqad043</ext-link></mixed-citation></ref>
      <ref id="bib.bibx105"><mixed-citation publication-type="journal"><string-name><surname>Warrick</surname>, <given-names>J.</given-names></string-name> (<year>2016</year>). <article-title>Black flags: The rise of ISIS</article-title>. <source><italic>Anchor Books</italic></source>, </mixed-citation></ref>
      <ref id="bib.bibx106"><mixed-citation publication-type="journal"><string-name><surname>Wignell</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Tan</surname>, <given-names>S.</given-names></string-name>, &amp; <string-name><surname>O’Halloran</surname>, <given-names>K.</given-names></string-name> (<year>2017</year>). <article-title>Under the shade of AK47s: A multimodal approach to violent extremist recruitment strategies for foreign fighters</article-title>. <source><italic>Critical Studies on Terrorism</italic></source>, <volume>10</volume>(<issue>3</issue>), <fpage>429</fpage>–<lpage>452</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1080/17539153.2017.1319319">https://doi.org/10.1080/17539153.2017.1319319</ext-link></mixed-citation></ref>
      <ref id="bib.bibx107"><mixed-citation publication-type="book"><string-name><surname>Williams</surname>, <given-names>N.</given-names></string-name>, <string-name><surname>Casas</surname>, <given-names>A.</given-names></string-name>, &amp; <string-name><surname>Wilkerson</surname>, <given-names>J.</given-names></string-name> (<year>2020</year>). <source><italic>Images as data for social science research: An introduction to convolutional neural nets for image classification</italic></source>. <publisher-name>Cambridge University Press</publisher-name>.</mixed-citation></ref>
      <ref id="bib.bibx108"><mixed-citation publication-type="book"><string-name><surname>Wilson</surname>, <given-names>L.</given-names></string-name> (<year>2017</year>). <source><italic>Understanding the Appeal of ISIS</italic></source>. <publisher-name>Cambridge UP</publisher-name>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.17863/CAM.68729">https://doi.org/10.17863/CAM.68729</ext-link></mixed-citation></ref>
      <ref id="bib.bibx109"><mixed-citation publication-type="journal"><string-name><surname>Winkler</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>El Damanhoury</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Dicker</surname>, <given-names>A.</given-names></string-name>, &amp; <string-name><surname>Lemieux</surname>, <given-names>A.</given-names></string-name> (<year>2018</year>). <article-title>Images of death and dying in ISIS media: A comparison of English and Arabic print publications</article-title>. <source><italic>Media, War &amp; Conflict</italic></source>, <volume>12</volume>(<issue>3</issue>), <fpage>248</fpage>–<lpage>262</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1177/1750635217746200">https://doi.org/10.1177/1750635217746200</ext-link></mixed-citation></ref>
      <ref id="bib.bibx110"><mixed-citation publication-type="journal"><string-name><surname>Winkler</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>El-Damanhoury</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Dicker</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Luu</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Kaczkowski</surname>, <given-names>W.</given-names></string-name>, &amp; <string-name><surname>El-Karhili</surname>, <given-names>N.</given-names></string-name> (<year>2019</year>). <article-title>Considering the military-media nexus from the perspective of competing groups: the case of ISIS and al-Qaeda in the Arabian Peninsula</article-title>. <source><italic>Dynamics of Asymmetric Conflict</italic></source>, <volume>13</volume>(<issue>1</issue>), <fpage>3</fpage>–<lpage>23</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1080/17467586.2019.1630744">https://doi.org/10.1080/17467586.2019.1630744</ext-link></mixed-citation></ref>
      <ref id="bib.bibx111"><mixed-citation publication-type="journal"><string-name><surname>Winkler</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>McMinimy</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>El Damanhoury</surname>, <given-names>K.</given-names></string-name>, &amp; <string-name><surname>Almahmoud</surname>, <given-names>M.</given-names></string-name> (<year>2021</year>). <article-title>Shifts in the visual media campaigns of AQAP and ISIS after high death and high publicity attacks</article-title>. <source><italic>Behavioral Sciences of Terrorism and Political Aggression</italic></source>, <volume>13</volume>(<issue>4</issue>), <fpage>251</fpage>–<lpage>264</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1080/19434472.2020.1759674">https://doi.org/10.1080/19434472.2020.1759674</ext-link></mixed-citation></ref>
      <ref id="bib.bibx112"><mixed-citation publication-type="journal"><string-name><surname>Winkler</surname>, <given-names>C.</given-names></string-name> (<year>2022</year>). <article-title>Revisiting representative form in ISIS media: How emerging collectives reconstitute communities in the 21st century</article-title>. <source><italic>Frontiers in Communication</italic></source>, <volume>7</volume>,  <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.3389/fcomm.2022.968302">https://doi.org/10.3389/fcomm.2022.968302</ext-link></mixed-citation></ref>
      <ref id="bib.bibx113"><mixed-citation publication-type="book"><string-name><surname>Winkler</surname>, <given-names>C.</given-names></string-name>, &amp; <string-name><surname>Damanhoury</surname>, <given-names>K.</given-names></string-name> (<year>2022</year>). <source><italic>Proto-state Media Systems: Al-Qaeda and ISIS as Exemplars</italic></source>. <publisher-name>Oxford University Press</publisher-name>.</mixed-citation></ref>
      <ref id="bib.bibx114"><mixed-citation publication-type="journal"><string-name><surname>Winkler</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Dewick</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Luu</surname>, <given-names>Y.</given-names></string-name>, &amp; <string-name><surname>Kaczkowski</surname>, <given-names>W.</given-names></string-name> (<year>2019</year>). <article-title>Dynamic/Static Image Use in ISIS’s Media Campaign: An Audience Involvement Strategy for Achieving Goals</article-title>. <source><italic>Terrorism and Political Violence</italic></source>, <volume>33</volume>(<issue>6</issue>), <fpage>1323</fpage>–<lpage>1341</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1080/09546553.2019.1608953">https://doi.org/10.1080/09546553.2019.1608953</ext-link></mixed-citation></ref>
      <ref id="bib.bibx115"><mixed-citation publication-type="journal"><string-name><surname>Winkler</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>El Damanhoury</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Dicker</surname>, <given-names>A.</given-names></string-name>, &amp; <string-name><surname>Lemieux</surname>, <given-names>A.</given-names></string-name> (<year>2016</year>). <article-title>The medium is terrorism: Transformation of the about to die trope in Dabiq</article-title>. <source><italic>Terrorism and Political Violence</italic></source>, <volume>31</volume>(<issue>2</issue>), <fpage>224</fpage>–<lpage>243</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1080/09546553.2016.1211526">https://doi.org/10.1080/09546553.2016.1211526</ext-link></mixed-citation></ref>
      <ref id="bib.bibx116"><mixed-citation publication-type="journal"><string-name><surname>Winter</surname>, <given-names>C.</given-names></string-name> (<year>2018</year>). <article-title>Apocalypse, later: a longitudinal study of the Islamic State brand</article-title>. <source><italic>Critical Studies in Media Communication</italic></source>, <volume>35</volume>(<issue>1</issue>), <fpage>103</fpage>–<lpage>121</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1080/15295036.2017.1393094">https://doi.org/10.1080/15295036.2017.1393094</ext-link></mixed-citation></ref>
      <ref id="bib.bibx117"><mixed-citation publication-type="journal"><string-name><surname>Zelin</surname>, <given-names>A.</given-names></string-name> (<year>2015</year>). <article-title>Picture Or It Didn’t Happen: A Snapshot of the Islamic State’s Official Media Output</article-title>. <source><italic>Perspectives on Terrorism</italic></source>, <volume>9</volume>(<issue>4</issue>), <fpage>85</fpage>–<lpage>97</lpage>.</mixed-citation></ref>
      <ref id="bib.bibx118"><mixed-citation publication-type="book"><string-name><surname>Zelin</surname>, <given-names>A.</given-names></string-name> (<year>2021</year>).  <ext-link ext-link-type="uri" xlink:href="https://jihadology.net/">https://jihadology.net/</ext-link></mixed-citation></ref>
      <ref id="bib.bibx119"><mixed-citation publication-type="book"><string-name><surname>Zelizer</surname>, <given-names>B.</given-names></string-name> (<year>2010</year>). <source><italic>About to die: How news images move the public</italic></source>. <publisher-name>Oxford University Press</publisher-name>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1080/10584609.2012.641782">https://doi.org/10.1080/10584609.2012.641782</ext-link></mixed-citation></ref>
      <ref id="bib.bibx120"><mixed-citation publication-type="journal"><string-name><surname>Zhang</surname>, <given-names>H.</given-names></string-name>, &amp; <string-name><surname>Pan</surname>, <given-names>J.</given-names></string-name> (<year>2019</year>). <article-title>CASM: A Deep-Learning Approach for Identifying Collective Action Events with Text and Image Data from Social Media</article-title>. <source><italic>Sociological Methodology</italic></source>, <volume>49</volume>(<issue>1</issue>), <fpage>1</fpage>–<lpage>57</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1177/0081175019860244">https://doi.org/10.1177/0081175019860244</ext-link></mixed-citation></ref>
      <ref id="bib.bibx121"><mixed-citation publication-type="journal"><string-name><surname>Zhang</surname>, <given-names>H.</given-names></string-name>, &amp; <string-name><surname>Peng</surname>, <given-names>Y.</given-names></string-name> (<year>2022</year>). <article-title>Image Clustering: An Unsupervised Approach to Categorize Visual Data in Social Science Research</article-title>. <source><italic>Sociological Methods &amp; Research</italic></source>, <volume>004912412210826</volume>,  <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1177/00491241221082603">https://doi.org/10.1177/00491241221082603</ext-link></mixed-citation></ref>
      <ref id="bib.bibx122"><mixed-citation publication-type="journal"><string-name><surname>Zhang</surname>, <given-names>X.</given-names></string-name>, &amp; <string-name><surname>Dahu</surname>, <given-names>W.</given-names></string-name> (<year>2019</year>). <article-title>Application of artificial intelligence algorithms in image processing</article-title>. <source><italic>Journal of Visual Communication and Image Representation</italic></source>, <volume>61</volume>, <fpage>42</fpage>–<lpage>49</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1016/j.jvcir.2019.03.004">https://doi.org/10.1016/j.jvcir.2019.03.004</ext-link></mixed-citation></ref>
      <ref id="bib.bibx123"><mixed-citation publication-type="journal"><string-name><surname>Zou</surname>, <given-names>J.</given-names></string-name>, &amp; <string-name><surname>Schiebinger</surname>, <given-names>L.</given-names></string-name> (<year>2018</year>). <article-title>AI can be sexist and racist—It’s time to make it fair</article-title>. <source><italic>Nature</italic></source>, <volume>559</volume>, <fpage>324</fpage>–<lpage>326</lpage>.</mixed-citation></ref>
    </ref-list>
    <app-group>
      <app id="A1">
    <title>Appendix A Manual content coding</title><table-wrap id="A1.T1"><label>Table A1:</label><caption><title>Justification for coding instrument categories with examples</title></caption>
<table>
<thead>
<tr>
<th><p><bold>Coding categories</bold></p></th>
<th><p><bold>Relationship to visual literature</bold></p></th>
<th><p><bold>Examples</bold></p></th></tr>
</thead>
<tbody>
<tr>
<td colspan="3"><bold>Denotative framing elements</bold></td></tr>
<tr>
<td><p><bold>Militants</bold></p></td>
<td><p>Studies of ISIS and al-Qaeda magazines show the frequent presence of “martyrs” and fighters, with different causes and outcomes for in-group and out-group combatants (Saltman &amp; Smith, <xref rid="bib.bibx93" ref-type="bibr">2015</xref>; Winkler et al., <xref rid="bib.bibx109" ref-type="bibr">2018</xref>, <xref rid="bib.bibx114" ref-type="bibr">2019</xref>; Winter, <xref rid="bib.bibx116" ref-type="bibr">2018</xref>).</p></td>
<td><p><graphic xlink:href="figures/appendixa/militants_alnaba10.svg" /></p><p><italic>Al-Naba (10)</italic></p>
 <p><graphic xlink:href="figures/appendixa/militants_almasra_45.svg" /></p><p><italic>Al-Masra (45)</italic></p></td></tr>
<tr>
<td><p><bold>Death</bold></p></td>
<td><p>Graphic images of death in wartime highlight the consequences of conflict and who holds the authority to decide who lives or dies (Höijer, <xref rid="bib.bibx49" ref-type="bibr">2004</xref>; Carlin, <xref rid="bib.bibx12" ref-type="bibr">2012</xref>). Work on ISIS/AQ magazines shows “about to die” images are more frequent than depictions of dead bodies (Winkler et al., <xref rid="bib.bibx109" ref-type="bibr">2018</xref>; Hanusch, <xref rid="bib.bibx46" ref-type="bibr">2013</xref>).</p></td>
<td><p><graphic xlink:href="figures/appendixa/death_dabiq_15.svg" /></p><p><italic>Dabiq (15)</italic></p>
 <p><graphic xlink:href="figures/appendixa/death_jihadirecollections_1.svg" /></p><p><italic>Jihadi Recollections (1)</italic></p></td></tr>
<tr>
<td><p><bold>Humans</bold></p></td>
<td><p>The number of humans in images denotes brotherhood, individual agency, or intended targets (Wilson, <xref rid="bib.bibx108" ref-type="bibr">2017</xref>; Winkler &amp; Damanhoury, <xref rid="bib.bibx113" ref-type="bibr">2022</xref>).</p></td>
<td><p><graphic xlink:href="figures/appendixa/humans_alnaba_23.svg" /></p><p><italic>Al-Naba (23)</italic></p>
 <p><graphic xlink:href="figures/appendixa/humans_inspire_3.svg" /></p><p><italic>Inspire (3)</italic></p></td></tr>
<tr>
<td><p><bold>Destruction</bold></p></td>
<td><p>Explosions and destruction of infrastructure or religious iconography are used to instill fear and acquiescence (Winkler et al., <xref rid="bib.bibx109" ref-type="bibr">2018</xref>, <xref rid="bib.bibx114" ref-type="bibr">2019</xref>).</p></td>
<td><p><graphic xlink:href="figures/appendixa/destruction_dabiq_11.svg" /></p><p><italic>Dabiq (11)</italic></p>
 <p><graphic xlink:href="figures/appendixa/destruction_almasra_4.svg" /></p><p><italic>Al-Masra (4)</italic></p></td></tr>
<tr>
<td><p><bold>Leaders</bold></p></td>
<td><p>Images of leaders denote enemies, potential allies, and intra-jihadist conflict following ISIS’s break from al-Qaeda (Winkler &amp; Damanhoury, <xref rid="bib.bibx113" ref-type="bibr">2022</xref>).</p></td>
<td><p><graphic xlink:href="figures/appendixa/leaders_rumiyah_1.svg" /></p><p><italic>Rumiyah (1)</italic></p>
 <p><graphic xlink:href="figures/appendixa/leaders_jihadirecollection_4.svg" /></p><p><italic>Jihadi Recollections (4)</italic></p></td></tr>
<tr>
<td><p><bold>Flags</bold></p></td>
<td><p>Flags are used to recruit supporters and anchor enmity toward rival states and groups (Karhili et al., <xref rid="bib.bibx31" ref-type="bibr">2021</xref>; Warrick, <xref rid="bib.bibx105" ref-type="bibr">2016</xref>).</p></td>
<td><p><graphic xlink:href="figures/appendixa/flags_dabiq_13.svg" /></p><p><italic>Dabiq (13)</italic></p>
 <p><graphic xlink:href="figures/appendixa/flags_almasra_12.svg" /></p><p><italic>Al-Masra (12)</italic></p></td></tr>
<tr>
<td colspan="3"><bold>Semiotic framing elements</bold></td></tr>
<tr>
<td><p><bold>Viewer position</bold></p></td>
<td><p>Camera angle shapes perceived strength and credibility (Kraft, <xref rid="bib.bibx56" ref-type="bibr">1986</xref>; McCain et al., <xref rid="bib.bibx71" ref-type="bibr">1977</xref>).</p></td>
<td><p><graphic xlink:href="figures/appendixa/viewerposition_alnaba_123.svg" /></p><p><italic>Al-Naba (123)</italic></p>
 <p><graphic xlink:href="figures/appendixa/viewerposition_inspire_2.svg" /></p><p><italic>Inspire (2)</italic></p></td></tr>
<tr>
<td><p><bold>Image position</bold></p></td>
<td><p>Foregrounding increases salience while backgrounding provides contextual grounding (Stone et al., <xref rid="bib.bibx98" ref-type="bibr">2003</xref>).</p></td>
<td><p><graphic xlink:href="figures/appendixa/imageposition_dabiq_10.svg" /></p><p><italic>Dabiq (10)</italic></p>
 <p><graphic xlink:href="figures/appendixa/imageposition_almasra_14.svg" /></p><p><italic>Al-Masra (14)</italic></p></td></tr>
<tr>
<td><p><bold>Viewer distance</bold></p></td>
<td><p>Intimate versus public distance implies relational closeness or collective anonymity (Jewitt &amp; Oyama, <xref rid="bib.bibx52" ref-type="bibr">2008</xref>).</p></td>
<td><p><graphic xlink:href="figures/appendixa/viewerdistance_rumiyah_12.svg" /></p><p><italic>Rumiyah (12)</italic></p>
 <p><graphic xlink:href="figures/appendixa/viewerdistance_inspire_11.svg" /></p><p><italic>Inspire (11)</italic></p></td></tr>
<tr>
<td><p><bold>Eye contact</bold></p></td>
<td><p>Direct gaze communicates dominance while averted gaze suggests reduced confrontation (Tang &amp; Schmeichel, <xref rid="bib.bibx99" ref-type="bibr">2015</xref>).</p></td>
<td><p><graphic xlink:href="figures/appendixa/eyecontact_dabiq_6.svg" /></p><p><italic>Dabiq (6)</italic></p>
 <p><graphic xlink:href="figures/appendixa/eyecontact_jihadirecollections_1.svg" /></p><p><italic>Jihadi Recollections (1)</italic></p></td></tr>
<tr>
<td><p><bold>Facial expressions</bold></p></td>
<td><p>Positive expressions promote trust whereas negative expressions prompt skepticism (Forgas &amp; East, <xref rid="bib.bibx36" ref-type="bibr">2008</xref>).</p></td>
<td><p><graphic xlink:href="figures/appendixa/facialexpression_alnaba_16.svg" /></p><p><italic>Al-Naba (16)</italic></p>
 <p><graphic xlink:href="figures/appendixa/facialexpression_inspire_4.svg" /></p><p><italic>Inspire (4)</italic></p></td></tr>
<tr>
<td><p><bold>Stance</bold></p></td>
<td><p>Body posture signals submissiveness or strength and control (Green, <xref rid="bib.bibx42" ref-type="bibr">2015</xref>; Jewitt &amp; Oyama, <xref rid="bib.bibx52" ref-type="bibr">2008</xref>).</p></td>
<td><p><graphic xlink:href="figures/appendixa/stance_rumiyah_10.svg" /></p><p><italic>Rumiyah (10)</italic></p>
 <p><graphic xlink:href="figures/appendixa/stance_inspire_15.svg" /></p><p><italic>Inspire (15)</italic></p></td></tr>
<tr>
<td colspan="3"><bold>Connotative framing elements</bold></td></tr>
<tr>
<td><p><bold>State-building</bold></p></td>
<td><p>Images of governance, services, markets, and territory signal institutional capacity and caliphate aspirations (Damanhoury, <xref rid="bib.bibx24" ref-type="bibr">2022</xref>; Karhili et al., <xref rid="bib.bibx31" ref-type="bibr">2021</xref>; Lokmanoglu, <xref rid="bib.bibx64" ref-type="bibr">2020</xref>; Winkler et al., <xref rid="bib.bibx110" ref-type="bibr">2019</xref>; Zelin, <xref rid="bib.bibx117" ref-type="bibr">2015</xref>).</p></td>
<td><p><graphic xlink:href="figures/appendixa/statebuilding_alnaba_2.svg" /></p><p><italic>Al-Naba (2)</italic></p>
 <p><graphic xlink:href="figures/appendixa/statebuilding_almasra_7.svg" /></p><p><italic>Al-Masra (7)</italic></p></td></tr>
<tr>
<td><p><bold>Law enforcement</bold></p></td>
<td><p>Images of capturing and punishing lawbreakers reinforce deterrence and community policing narratives (Barr &amp; Herfroy-Mischler, <xref rid="bib.bibx5" ref-type="bibr">2017</xref>; Chouliaraki &amp; Kissas, <xref rid="bib.bibx20" ref-type="bibr">2018</xref>; Damanhoury &amp; Winkler, <xref rid="bib.bibx25" ref-type="bibr">2018</xref>).</p></td>
<td><p><graphic xlink:href="figures/appendixa/lawenforcement_dabiq_4.svg" /></p><p><italic>Dabiq (4)</italic></p>
 <p><graphic xlink:href="figures/appendixa/lawenforcement_almasra_17.svg" /></p><p><italic>Al-Masra (17)</italic></p></td></tr>
<tr>
<td><p><bold>Allegiance pledges</bold></p></td>
<td><p>Allegiance imagery visualizes coalition-building and collective solidarity (Gregg, <xref rid="bib.bibx43" ref-type="bibr">2023</xref>).</p></td>
<td><p><graphic xlink:href="figures/appendixa/allegiencepledges_alnaba_12.svg" /></p><p><italic>Al-Naba (12)</italic></p>
 <p><graphic xlink:href="figures/appendixa/allegiencepledges_almasra_36.svg" /></p><p><italic>Al-Masra (36)</italic></p></td></tr>
<tr>
<td><p><bold>Media propaganda</bold></p></td>
<td><p>Infographics and posters function as authority strategies and publicity for media products (Glausch, <xref rid="bib.bibx39" ref-type="bibr">2020</xref>; Abdelrahim, <xref rid="bib.bibx1" ref-type="bibr">2019</xref>).</p></td>
<td><p><graphic xlink:href="figures/appendixa/mediapropaganda_dabiq_13.svg" /></p><p><italic>Dabiq (13)</italic></p>
 <p><graphic xlink:href="figures/appendixa/mediapropaganda_inspire_11.svg" /></p><p><italic>Inspire (11)</italic></p></td></tr>
<tr>
<td><p><bold>About to die</bold></p></td>
<td><p>“About to die” images are especially circulation-prone and vary by certainty and probability (Zelizer, <xref rid="bib.bibx119" ref-type="bibr">2010</xref>; Winkler et al., <xref rid="bib.bibx115" ref-type="bibr">2016</xref>).</p></td>
<td><p><graphic xlink:href="figures/appendixa/abouttodie_rumiyah_3.svg" /></p><p><italic>Rumiyah (3)</italic></p>
 <p><graphic xlink:href="figures/appendixa/abouttodie_inspire_16.svg" /></p><p><italic>Inspire (16)</italic></p></td></tr>
<tr>
<td><p><bold>Religion</bold></p></td>
<td><p>Religious gestures and iconography signal piety, legitimacy, and ideological commitment (Winkler, <xref rid="bib.bibx112" ref-type="bibr">2022</xref>; Heck, <xref rid="bib.bibx48" ref-type="bibr">2017</xref>).</p></td>
<td><p><graphic xlink:href="figures/appendixa/religion_dabiq_7.svg" /></p><p><italic>Dabiq (7)</italic></p>
 <p><graphic xlink:href="figures/appendixa/religion_jihadirecollections_2.svg" /></p><p><italic>Jihadi Recollections (2)</italic></p></td></tr>
</tbody>
</table></table-wrap>
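<p>For readers implementing the instrument computationally, the categories in Table A1 can be kept in a small machine-readable codebook so that manual and automated labels stay aligned across the framing tiers. The sketch below is illustrative only; it uses the variable names from the Appendix B prompt where equivalents exist (image position has no prompt counterpart), and the listed code sets are copied from that prompt.</p><code># Illustrative machine-readable rendering of the Table A1 coding tiers,
# using the Appendix B prompt's variable names where equivalents exist.
CODEBOOK_TIERS = {
    "denotative": ["MilitaryRole", "Death", "Humans",
                   "Destruction", "Leaders", "Flag"],
    "semiotic": ["ViewerPosition", "Distance", "EyeContact",
                 "FacialExpressions", "Stance"],  # image position omitted
    "connotative": ["State", "AboutToDie", "Religion"],
}

# Valid codes for a few variables, per the Appendix B prompt
# (0 = structural N/A, 99 = unclear).
VALID_CODES = {
    "Humans": {1, 2, 3, 4, 99},
    "Death": {1, 2, 3, 99},
    "AboutToDie": {0, 1, 2, 3, 99},
}

def is_valid(variable: str, code: int) -&gt; bool:
    """True when a code falls within the allowed set for that variable."""
    return code in VALID_CODES.get(variable, set())
</code></app>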
      <app id="A2">
    <title>Appendix B Computational analysis</title><sec id="A2.SS1">
      <title>B1. ChatGPT-4o prompt</title><code>“You are labeling a single image. Return ONLY a minified JSON object with integer codes (no extra text).
Conventions:
- 0 = Not applicable (structurally skipped by dependency rules)
- 99 = Unclear / cannot determine
- “Mixed” means multiple mutually exclusive states apply in the same image.

Counting &amp; visibility rules:
- Count a human if ≥25% of head/torso is visible (silhouettes count if clearly human).
- Small group = 2–10 inclusive; Large group = ≥11.
- Eye direction: Up/Down if head pitch is ~≥15° from neutral; otherwise Eye Level.

Dependencies:
- If Humans=4 (No humans), then FacialExpressions, EyeContact, Stance = 0.
- If Death=3 (N/A), then AboutToDie=0.
- If AboutToDie ∈ {1,2,3}, then Death ≠ 3.
- “State” = actions/activities

Categories and codes:
Distance (viewer distance):
1 Intimate (face/neck close-up)
2 Personal (~1.5–4 ft, waist-up)
3 Social/Public (&gt;4 ft)
4 Mixed
5 Unknowable (e.g., infographics, maps)
99 Unclear

ViewerPosition:
1 Looking up
2 Looking down
3 Eye level
4 Mixed
99 Unclear

Humans:
1 One
2 Small group (2–10)
3 Large group (≥11)
4 No humans
99 Unclear

MilitaryRole (relationship of depicted humans to any military/armed group):
1 Martyr (post-mortem)
2 Armed group member (group A example: “IS”)
3 Armed group member (non-A)
4 Future recruits (children in uniform/with weapons)
5 Mixed (1–4)
6 No military role present
99 Unclear

Leaders (visibly depicted or clearly referenced by portrait/statue/backdrop):
1 Jihad/Extremist leaders
2 Western state leaders
3 Arab state leaders
4 Asian/Russian state leaders
5 Shiite/Tribal/Other Muslim group leaders
6 Mixed (1–5)
7 No leaders present
99 Unclear

FacialExpressions:
1 Positive
2 Negative
3 Mixed
4 No humans (structural N/A → use 0 if Humans=4)
99 Unclear

EyeContact:
1 Looking at viewer
2 Looking upward
3 Looking downward
4 Looking at other person/object
5 Eyes closed/not visible
6 No humans (structural N/A → use 0 if Humans=4)
99 Unclear

Stance:
1 On knees (not praying)
2 Sitting (default if riding)
3 Standing
4 Lying down
5 Praying
6 Mixed (1–5)
7 No humans (structural N/A → use 0 if Humans=4)
99 Unclear

Death:
1 About to die
2 Dead
3 Not applicable (no death present)
99 Unclear

State (actions/activities):
1 Social services/infrastructure use (education, health)
2 Law enforcement/punishment
3 Maps (not infographics)
4 Local markets/economy/agriculture
5 Passports
6 Natural landscape (pristine)
7 Pledging allegiance to Group A (e.g., “IS”)
8 Other extremist-state propaganda (banners, hashtags)
9 Mixed (1–8, excluding 3 if purely graphical)
10 Not applicable (no state actions)
99 Unclear

Religion (visible practices/objects):
1 One finger pointing upward (tawhid gesture)
2 Reading scripture
3 Praying
4 Hajj (Kaaba)
5 Religious iconography/shrines
6 Mixed (1–5)
7 Not applicable (no religious content)
99 Unclear

Flag:
1 Extremist group flag (e.g., Group A)
2 U.S. flag
3 MENA state flag
4 Other flags
5 Multiple flags
6 Not applicable (no flags)
99 Unclear

Destruction:
1 Active (fire/explosion)
2 Aftermath (destroyed structures/iconography)
3 Not applicable (no destruction)
99 Unclear

AboutToDie (ONLY if applicable):
1 Possible death (no confirmed kill)
2 Certain death (confirmed kill imminent)
3 Presumed death (weapons/destruction implying lethal risk)
0 Not applicable
99 Unclear

Return JSON with this schema:
{
 "Distance": int, "ViewerPosition": int, "Humans": int, "MilitaryRole": int, "Leaders": int, "FacialExpressions": int, "EyeContact": int, "Stance": int,
 "Death": int, "State": int, "Religion": int, "Flag": int,
 "Destruction": int, "AboutToDie": int}
Optionally include "confidence": {&lt;same keys&gt;: float 0-1}”
</code><fig id="A2.F1"><label>Figure B1:</label><caption><title>Distribution of codes by variable and coder</title></caption>
        <graphic xlink:href="figures/appendixb/figureB1.svg" /></fig><table-wrap id="A2.T1"><label>Table B1:</label><caption><title>Precision, recall, and weighted performance metrics for each variable</title></caption>
        </table-wrap></sec>
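      <sec id="A2.SS2">
      <title>B2. Illustrative validation code</title><p>As a minimal sketch (not the authors’ exact pipeline), the following shows how a single image could be submitted to GPT-4o together with the B1 prompt and how the returned codes could be checked against the prompt’s dependency rules. It assumes the official OpenAI Python client (v1.x); PROMPT, the file path, and the temperature setting are placeholders.</p><code># Illustrative only: label one image with the B1 prompt and check the
# structural dependency rules. PROMPT, the file path, and the model
# parameters are assumptions, not the authors' exact pipeline.
import base64
import json

from openai import OpenAI  # official OpenAI Python client, v1.x

PROMPT = "..."  # the full labeling prompt reproduced in B1

# Variables that must be 0 when Humans=4 (no humans), per the prompt.
HUMAN_DEPENDENT = ("FacialExpressions", "EyeContact", "Stance")

def label_image(path: str, client: OpenAI) -&gt; dict:
    """Send one image plus the B1 prompt; return the parsed integer codes."""
    with open(path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    resp = client.chat.completions.create(
        model="gpt-4o",
        temperature=0,  # favor reproducible labels
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": PROMPT},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    )
    # The prompt instructs the model to return ONLY minified JSON.
    return json.loads(resp.choices[0].message.content)

def check_dependencies(codes: dict) -&gt; list:
    """Flag violations of the dependency rules stated in the prompt."""
    errors = []
    if codes.get("Humans") == 4:
        errors += [f"{v} should be 0 when Humans=4"
                   for v in HUMAN_DEPENDENT if codes.get(v) != 0]
    if codes.get("Death") == 3 and codes.get("AboutToDie") != 0:
        errors.append("AboutToDie should be 0 when Death=3")
    if codes.get("AboutToDie") in {1, 2, 3} and codes.get("Death") == 3:
        errors.append("Death must not be 3 when AboutToDie is 1-3")
    return errors

if __name__ == "__main__":
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    codes = label_image("example.jpg", client)
    print(codes, check_dependencies(codes))
</code><p>The per-variable precision, recall, and weighted metrics reported in Table B1 can be reproduced from paired manual and AI codes along the following lines; the arrays here are placeholders, not the study’s data.</p><code># Sketch of the weighted metrics in Table B1, assuming `manual` and
# `ai` are aligned integer codes for one variable.
from sklearn.metrics import precision_recall_fscore_support

manual = [1, 2, 3, 4, 2, 1]  # placeholder human codes
ai = [1, 2, 3, 4, 1, 1]      # placeholder GPT-4o codes

precision, recall, f1, _ = precision_recall_fscore_support(
    manual, ai, average="weighted", zero_division=0)
print(f"P={precision:.2f}  R={recall:.2f}  weighted F1={f1:.2f}")
</code></sec></app>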
      <app id="A3">
    <title>Appendix C Image-context relationship</title><table-wrap id="A3.T1"><label>Table C1:</label><caption><title>Context Relationships that Correlate with Increased or Decreased Levels of Image Types</title></caption>
      </table-wrap>
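      <p>As a hypothetical illustration of the Table C1 analysis, the association between a contextual indicator and the prevalence of an image type could be computed as follows; the column names and values are placeholders, not the study’s data.</p><code># Hypothetical sketch: correlate a binary context feature with
# per-issue counts of one image type. Columns and data are placeholders.
import pandas as pd
from scipy.stats import spearmanr

df = pd.DataFrame({
    "high_publicity_attack": [0, 1, 0, 1, 1, 0],  # context indicator
    "about_to_die_images":   [2, 7, 1, 5, 9, 3],  # image-type count
})
rho, p = spearmanr(df["high_publicity_attack"], df["about_to_die_images"])
print(f"Spearman rho={rho:.2f}, p={p:.3f}")  # sign shows increase/decrease
</code></app>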
    </app-group>
  </back>
</article>