<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (NISO Z39.96-2019) Journal Publishing DTD v1.2 20190208//EN" "https://jats.nlm.nih.gov/publishing/1.2/JATS-journalpublishing1-mathml3.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" dtd-version="1.2" xml:lang="en" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <front>
    <journal-meta>
  <journal-id journal-id-type="publisher-id">CCR</journal-id>
  <journal-title-group>
    <journal-title>Computational Communication Research</journal-title>
  </journal-title-group>
  <issn pub-type="ppub" />
  <issn pub-type="epub">2665-9085</issn>
  <publisher>
    <publisher-name>Amsterdam University Press</publisher-name>
    <publisher-loc>Amsterdam</publisher-loc>
  </publisher>
</journal-meta><article-meta>
      <article-id pub-id-type="publisher-id">CCR2026.1.1.HAGA</article-id><article-id pub-id-type="doi">10.5117/CCR2026.1.1.HAGA</article-id><article-categories><subj-group subj-group-type="heading"><subject>Article</subject></subj-group></article-categories><title-group>
        <article-title>Topic Classification of News Articles from URLs Alone</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <name>
            <surname>Hagar</surname>
            <given-names>Nick</given-names>
          </name>
          <aff>Department of Communication Studies, Northwestern University, USA</aff>
        </contrib>
      </contrib-group>
      <pub-date pub-type="epub"><year>2026</year></pub-date><volume>8</volume><issue>1</issue><fpage>1</fpage><permissions><copyright-statement>© The authors</copyright-statement><copyright-year>2026</copyright-year><copyright-holder>The authors</copyright-holder><license license-type="open-access"><license-p>This is an open access article distributed under the CC BY 4.0 license <ext-link ext-link-type="uri" xlink:href="https://creativecommons.org/licenses/by/4.0/">https://creativecommons.org/licenses/by/4.0/</ext-link></license-p></license></permissions><abstract>
    <title>Abstract</title><p>This paper presents a novel approach to classifying news articles by topic using only their URLs, addressing growing challenges in accessing article text due to paywalls and scraping restrictions. By fine-tuning a DistilBERT transformer model on URL data alone, I demonstrate topic classification performance that matches or exceeds traditional approaches requiring article text. Across three benchmark datasets spanning multiple languages and over 660,000 articles from more than 11,000 news domains, this URL-based topic classifier achieved superior F1 scores compared to both conventional machine learning methods and existing URL-based techniques. While this method requires more computational resources than simpler topic classification approaches, it dramatically reduces data collection requirements, offering researchers a practical alternative when text access is limited. These findings suggest that news article URLs contain richer semantic information than previously recognized, opening new possibilities for large-scale news content analysis in increasingly restrictive digital environments.</p>
  </abstract>
      <kwd-group>
        <title>Keywords:</title><kwd>URL classification</kwd>
        <kwd>news classification</kwd>
        <kwd>natural language processing</kwd>
        <kwd>machine learning</kwd>
        <kwd>data access</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="S1">
      
      
      
      
<title>Introduction</title><p id="S1.p1">Classifying news articles is a common research task, enabling segmentation by topic, news type, domain relevance, or writing style (León et al., <xref rid="bib.bibx13" ref-type="bibr">2023</xref>; Bakshy et al., <xref rid="bib.bibx1" ref-type="bibr">2015</xref>; Flaxman et al., <xref rid="bib.bibx16" ref-type="bibr">2016</xref>; León &amp; Trilling, <xref rid="bib.bibx11" ref-type="bibr">2021</xref>). While machine learning has greatly expanded classification capacity, it presents notable challenges around computational resources and data access.</p><p id="S1.p2"><italic>Compute</italic> difficulties center on the demand for computational resources that machine learning models introduce. Whereas simple statistical approaches, heuristic rules, and hand labeling have negligible resource demands, training and running a machine learning model can be beyond the capabilities of typical consumer-grade computer hardware. The need for specialized equipment adds a barrier to entry that may prevent researchers from applying state-of-the-art methods to their topic classification problems.</p><p id="S1.p3">But even when using a sufficiently optimized model, or sufficiently powerful hardware, access to <italic>data</italic> is a necessary and challenging prerequisite for classifying news articles at scale. Amassing a large dataset of news articles presents a range of technical and policy obstacles. At a technical level, researchers must develop a web scraping approach that accurately extracts the information of interest (e.g., headline, article text, publication date) from an HTML document, across multiple types of documents, potentially from multiple websites with their own structure and document layout. 
The need to identify, download, and process articles across hundreds to thousands of web pages is cumbersome, even for researchers with significant technical expertise (Freelon, <xref rid="bib.bibx17" ref-type="bibr">2018</xref>).</p><p id="S1.p4">Given the challenges of data access, less resource-intensive topic classification methods are desirable. Building on prior work that has leveraged URLs’ semantic information (León &amp; Vermeer, <xref rid="bib.bibx12" ref-type="bibr">2023</xref>; Flaxman et al., <xref rid="bib.bibx16" ref-type="bibr">2016</xref>; Guess, <xref rid="bib.bibx20" ref-type="bibr">2021</xref>), this paper proposes classifying news articles using only their URLs. Specifically, by fine-tuning a DistilBERT model on URL data, I demonstrate performance that is not only comparable to methods using full article text but can even outperform models trained on headlines or article excerpts. This data-efficient approach offers a practical solution for news topic classification amid increasing digital access constraints.</p></sec>
    <sec id="S2">
      
      
      
      
<title>Background</title><p id="S2.p1">News article classification underpins a wide range of common analyses in communication research. For researchers, the ability to sort news coverage (e.g., by topic, or by whether or not articles represent political coverage) enables them to make comparisons across groups, filter down to a salient subset of articles for analysis, or quantify the representation of a particular kind of coverage in a larger corpus. But while machine learning approaches to classification can be applied at scale to large corpora, they generally require text from each article to make reliable predictions. This text requirement is cumbersome for researchers, both from the technical perspective of large-scale web scraping and from the perspective of gaining access to news articles in an increasingly restrictive data environment. To address these challenges, I draw on machine learning research that classifies web pages solely from their URLs, and explore a potential application to news articles.</p><sec id="S2.SS1">
        
        
        
        
      <title>Classifying news articles</title><p id="S2.SS1.p1">Machine learning classifiers have permeated quantitative study of news coverage. Because these algorithms allow researchers to annotate or filter text at scale, they offer a way to process data without requiring human review of every record in a sample. They are also extremely versatile, capable of classifying text along many dimensions.</p><p id="S2.SS1.p2">In one common use case, researchers use machine learning to classify articles into a set of predetermined topics (Singh et al., <xref rid="bib.bibx41" ref-type="bibr">2020</xref>). Those topics then provide the basis for further quantitative analysis. For example, León et al. (<xref rid="bib.bibx13" ref-type="bibr">2023</xref>) train a classifier to label news articles across six topics, to gauge the relative shift in engagement across coverage areas around elections. Similarly, Chuang &amp; Larochelle (<xref rid="bib.bibx9" ref-type="bibr">2014</xref>) apply topic modeling to the news coverage database Media Cloud, in an effort to analyze coverage areas by news outlet and over time. And Kaiser et al. (<xref rid="bib.bibx26" ref-type="bibr">2019</xref>) apply a topic model to the news coverage produced by far-right outlets in the U.S., to gauge the relative presence or absence of coverage areas across this subset of the media. In these cases, an algorithm allows the researchers to aggregate individual articles into broader coverage areas, either as a way to explore the contents of that coverage or to monitor changes in theoretically interesting types of coverage over time.</p><p id="S2.SS1.p3">For many empirical studies, only a subset of news coverage is relevant to the research question. In these cases, binary classification often provides an important filtering technique to identify a salient sample within a larger news corpus. 
For instance, researchers might use binary classifiers to limit a corpus to only hard news about politics (Budak et al., <xref rid="bib.bibx6" ref-type="bibr">2016</xref>; Flaxman et al., <xref rid="bib.bibx16" ref-type="bibr">2016</xref>). Similarly, León &amp; Vermeer (<xref rid="bib.bibx12" ref-type="bibr">2023</xref>) employed binary classification to identify political news content across multiple languages and contexts, and Bakshy et al. (<xref rid="bib.bibx1" ref-type="bibr">2015</xref>) utilized binary classification in their study of ideological exposure on Facebook, to identify hard news content. These studies illustrate how binary classification serves as a crucial preliminary step in large-scale news analysis, allowing researchers to efficiently identify relevant subsets of content for more detailed examination.</p><p id="S2.SS1.p4">Finally, algorithmic classification can be useful for identifying more abstract concepts in news coverage. For example, researchers have leveraged a range of algorithmic classification techniques to operationalize news values in articles (Trilling et al., <xref rid="bib.bibx43" ref-type="bibr">2017</xref>; Burggraaff &amp; Trilling, <xref rid="bib.bibx7" ref-type="bibr">2020</xref>; Hagar et al., <xref rid="bib.bibx21" ref-type="bibr">2021</xref>) and to explore automated techniques for measuring news agendas (Korenčić et al., <xref rid="bib.bibx29" ref-type="bibr">2015</xref>). And while sentiment analysis often attempts to place text on a continuous measure from negative to positive (e.g., Hutto &amp; Gilbert, <xref rid="bib.bibx24" ref-type="bibr">2014</xref>), some researchers also operationalize discrete emotional labels via supervised classifiers (León &amp; Trilling, <xref rid="bib.bibx11" ref-type="bibr">2021</xref>).</p><p id="S2.SS1.p5">Together, these approaches highlight the ubiquity of algorithmic classification as an analysis tool in the study of news coverage. 
Machine learning classifiers are a widely accepted tool for quantitative communication researchers, often leveraged to label or filter news corpora at a scale that would be infeasible for human review.</p><p id="S2.SS1.p6">Studies that leverage classification use a range of algorithmic approaches. Many researchers take a supervised learning approach, using model architectures such as support vector machines (León &amp; Trilling, <xref rid="bib.bibx11" ref-type="bibr">2021</xref>; Bakshy et al., <xref rid="bib.bibx1" ref-type="bibr">2015</xref>), random forests (Hagar et al., <xref rid="bib.bibx21" ref-type="bibr">2021</xref>), and transformers (Clercq et al., <xref rid="bib.bibx10" ref-type="bibr">2020</xref>). These approaches are appropriate for cases where the topics of interest are known ahead of time. In more exploratory cases, where the goal of the analysis is to identify the types of coverage represented in a corpus, researchers rely on unsupervised topic modeling approaches like LDA (Korenčić et al., <xref rid="bib.bibx29" ref-type="bibr">2015</xref>; Chuang &amp; Larochelle, <xref rid="bib.bibx9" ref-type="bibr">2014</xref>) and STM (Kaiser et al., <xref rid="bib.bibx26" ref-type="bibr">2019</xref>). Each of these techniques generally requires some form of text preprocessing (e.g., counting the frequency of each word) to turn documents into numerical vectors.</p><p id="S2.SS1.p7">These algorithms vary in computational complexity and predictive performance, but typically rely on text from news articles as input data. Many researchers use the full article text as features for classifiers (Clercq et al., <xref rid="bib.bibx10" ref-type="bibr">2020</xref>; Trilling et al., <xref rid="bib.bibx43" ref-type="bibr">2017</xref>), while others find headlines provide a more useful signal (Hagar et al., <xref rid="bib.bibx21" ref-type="bibr">2021</xref>; Kuiken et al., <xref rid="bib.bibx30" ref-type="bibr">2017</xref>). 
In all cases, large-scale algorithmic classification requires some form of news text to determine group membership.</p><p id="S2.SS1.p8">This requirement presents a challenge for researchers. While much news coverage can be collected via web scraping, an increasing number of restrictions limit the kinds of text that are programmatically accessible. For news websites with paywalls, web scrapers often cannot access the full text of articles. And as news publishers have updated their scraping policies in response to the large-scale collection of training data for large language models, web scraping for research that was formerly permissible may no longer be so (Welsh, <xref rid="bib.bibx46" ref-type="bibr">2024</xref>). The articles that researchers are able to classify are therefore limited by what is accessible—through, for example, archival databases like the Internet Archive—which may not be a representative sample (Thelwall &amp; Vaughan, <xref rid="bib.bibx42" ref-type="bibr">2004</xref>). In the absence of reliable access to primary-source records of news coverage, researchers require alternative methodological approaches that still enable the kinds of large-scale sorting and filtering that text classification provides.</p></sec><sec id="S2.SS2">
        
        
        
        
      <title>URL-based web page classification</title><p id="S2.SS2.p1">In some domains, researchers have demonstrated the efficacy of methods that classify web pages without accessing the contents of the pages themselves. These approaches use the same supervised machine learning algorithms, text preprocessing pipelines, and evaluation metrics as the classifiers explored above. But rather than applying them to the text of a web page, these approaches apply them to the contents of the URL.</p><p id="S2.SS2.p2">The URL (uniform resource locator) of a web page is its address on the Internet, which allows computers to retrieve its contents. URLs often contain human-readable descriptive information about their contents. For example, the URL <ext-link xlink:href="huffpost.com/static/about-us" ext-link-type="uri">huffpost.com/static/about-us</ext-link> likely has information about the Huffington Post. This characteristic makes URLs potentially informative for downstream natural language tasks.</p><p id="S2.SS2.p3">URL-based classification is prevalent in domains where accessing web pages directly is either impossible or inadvisable. For example, when identifying malicious websites, detecting harmful pages without visiting them is crucial (Ma et al., <xref rid="bib.bibx34" ref-type="bibr">2009</xref>; Vanhoenshoven et al., <xref rid="bib.bibx44" ref-type="bibr">2016</xref>). URLs may also be the only available textual information for pages containing solely multimedia content (Zhang et al., <xref rid="bib.bibx48" ref-type="bibr">2006</xref>; Baykan et al., <xref rid="bib.bibx3" ref-type="bibr">2011</xref>). 
In addition, URL-based classifiers can provide faster inference by avoiding the download and processing overhead of full web pages (e.g., in personalized web readers— Baykan et al., <xref rid="bib.bibx3" ref-type="bibr">2011</xref>; Kan &amp; Thi, <xref rid="bib.bibx28" ref-type="bibr">2005</xref>; Hernández et al., <xref rid="bib.bibx23" ref-type="bibr">2014</xref>).</p><p id="S2.SS2.p4">Of course, URL-based classification creates trade-offs. Researchers often leverage such approaches because of their efficiency and practical application, not because of their predictive performance (Hernández et al., <xref rid="bib.bibx23" ref-type="bibr">2014</xref>). URLs tend to be extremely short, and so the information they convey is often limited. In some cases, it is not possible to glean any information about the topic of a web page solely from its URL (e.g., the domain name of a small business without context— Baykan et al., <xref rid="bib.bibx2" ref-type="bibr">2009</xref>). And URLs can be complex and nonstandard, creating challenges from a language processing perspective: Words can be abbreviated, capitalization is irregular or not present, and there is no punctuation to indicate structure (Baykan et al., <xref rid="bib.bibx3" ref-type="bibr">2011</xref>; Kan, <xref rid="bib.bibx27" ref-type="bibr">2004</xref>).</p><p id="S2.SS2.p5">Still, URL-based classification approaches have proven effective on real-world datasets. Numerous researchers report strong predictive performance on datasets ranging from news to malicious web pages to academic websites (Kan, <xref rid="bib.bibx27" ref-type="bibr">2004</xref>; Ma et al., <xref rid="bib.bibx34" ref-type="bibr">2009</xref>; Rajalakshmi &amp; Aravindan, <xref rid="bib.bibx38" ref-type="bibr">2018</xref>; Singh et al., <xref rid="bib.bibx40" ref-type="bibr">2012</xref>). 
These approaches use a combination of natural language processing techniques and supervised machine learning—many create input features from URLs by splitting them into n-grams of various sizes (e.g., Baykan et al., <xref rid="bib.bibx3" ref-type="bibr">2011</xref>) and by applying token weighting approaches (Rajalakshmi &amp; Aravindan, <xref rid="bib.bibx38" ref-type="bibr">2018</xref>; Hernández et al., <xref rid="bib.bibx22" ref-type="bibr">2012</xref>). They then train a range of classification models—including Naive Bayes classifiers, support vector machines, and random forests—on these features.</p><p id="S2.SS2.p6">While combining token-based featurization with supervised learning algorithms has been the dominant approach, it may not be optimal for URL classification. The field’s focus on tokenization strategies and reliance on support vector machines underscores a fundamental challenge: sparsity in limited text data. This limitation suggests the need for more sophisticated approaches to text representation and modeling.</p><p id="S2.SS2.p7">Transformers offer a promising solution to this challenge. This architecture has demonstrated exceptional performance across diverse sequence modeling tasks, from basic classification to sophisticated generation in large language models (Vaswani et al., <xref rid="bib.bibx45" ref-type="bibr">2017</xref>; Brown et al., <xref rid="bib.bibx5" ref-type="bibr">2020</xref>). While transformers still require text tokenization, their use of dense vector representations directly addresses the sparsity problem that has constrained traditional approaches (Vaswani et al., <xref rid="bib.bibx45" ref-type="bibr">2017</xref>). This characteristic makes them particularly well-suited for URL classification, where the limited text available demands more efficient representation.</p><p id="S2.SS2.p8">One effective transformer for natural language processing tasks is BERT. 
It generates context-aware representations of text sequences (Devlin et al., <xref rid="bib.bibx15" ref-type="bibr">2019</xref>) and has proven effective as a classification model, including for news articles, with performance that surpasses other supervised machine learning methods (Clercq et al., <xref rid="bib.bibx10" ref-type="bibr">2020</xref>). BERT has also shown promise as a URL-based classifier, in the context of detecting malicious web pages (Chang et al., <xref rid="bib.bibx8" ref-type="bibr">2021</xref>).</p><p id="S2.SS2.p9">Combining these threads of research, then, BERT-based text classification has proven effective for labeling news articles, and a range of classification techniques, including BERT, have performed well when trained solely on URLs across a range of domains. There have even been some efforts to incorporate news article URLs into classification tasks for communication research. Multiple studies have taken a “distant labeling” approach to classifying news articles, which partially relies on information from the URL (León et al., <xref rid="bib.bibx14" ref-type="bibr">2023</xref>; León &amp; Vermeer, <xref rid="bib.bibx12" ref-type="bibr">2023</xref>). In these studies, researchers map the section contained in a URL to a topic of interest (e.g., a URL with <ext-link xlink:href="/politics/" ext-link-type="uri">/politics/</ext-link> would be a political story, while one with <ext-link xlink:href="/sports/" ext-link-type="uri">/sports/</ext-link> would not), as a first step in a classification pipeline. This augmentation produces more accurate inference than a supervised machine learning model on its own.</p><p id="S2.SS2.p10">Given the efficacy of URLs as predictive signals, in news and other domains, these data provide an opportunity to develop less data-intensive training sets for news article classifications. 
By leveraging a state-of-the-art transformer architecture, I argue that it is possible to train a BERT-based model to assign topics to articles based solely on their URLs, without needing access to the articles themselves.</p><p id="S2.SS2.p12"><italic>H1:</italic> A BERT-based URL topic classification model will achieve equal or higher F1 scores compared to traditional supervised machine learning approaches that use news article text.</p></sec></sec>
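The token-based featurization this literature describes, splitting URLs into character n-grams before training a supervised model, can be sketched in a few lines of Python. The function below is an illustrative sketch, not a reimplementation of any cited pipeline; the n-gram sizes are assumptions.

```python
from collections import Counter

def url_ngrams(url, n_sizes=(4, 5, 6)):
    """Split a URL into overlapping character n-grams, in the spirit of
    token-based URL featurization (sizes here are illustrative)."""
    # Lowercase and strip the scheme so "https://" does not dominate features.
    text = url.lower()
    for prefix in ("https://", "http://"):
        if text.startswith(prefix):
            text = text[len(prefix):]
    grams = Counter()
    for n in n_sizes:
        for i in range(len(text) - n + 1):
            grams[text[i:i + n]] += 1
    return grams

# Sparse counts like these would then feed a classifier such as a
# support vector machine or Naive Bayes model.
features = url_ngrams("example.com/politics/election-results")
```

In the cited work, such sparse counts are typically reweighted (e.g., with TF-IDF) before model training, which is exactly the sparsity problem that motivates the dense transformer representations discussed above.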
    <sec id="S3">
      
      
      
      
<title>Data</title><p id="S3.p1">To evaluate each topic classification approach, I used three publicly available news topic classification benchmark datasets. These datasets were selected by searching the popular repository archives Kaggle, HuggingFace, and the Harvard Dataverse for projects that 1) consisted of news articles and associated topics, and 2) contained a URL for each article. I evaluated each method on multiple datasets to gauge its effectiveness across a range of topic labeling approaches, news domains, and languages. Below is a description of each dataset. Table <xref rid="S3.T1" ref-type="table">1</xref> contains summary statistics for each.</p><table-wrap id="S3.T1"><label>Table 1:</label><caption><title>Descriptive statistics for the filtered subset of the three benchmarking datasets used in this analysis.</title></caption>
        
        
        
        
      
<table>
<thead>
<tr>
<th><p><bold>Dataset</bold></p></th>
<th><bold>Rows</bold></th>
<th><p><bold>Unique domains</bold></p></th>
<th><bold>Topics</bold></th>
<th><p><bold>Gini coefficient (URLs per domain)</bold></p></th>
<th><p><bold>Topic label entropy</bold></p></th></tr>
</thead>
<tbody>
<tr>
<td><p>HuffPost</p></td>
<td>144,167</td>
<td>1</td>
<td>14</td>
<td><p>N/A</p></td>
<td><p>3.47</p></td></tr>
<tr>
<td><p>News Aggregator</p></td>
<td>421,890</td>
<td>10,925</td>
<td>4</td>
<td><p>0.82</p></td>
<td><p>1.89</p></td></tr>
<tr>
<td><p>RecognaSumm</p></td>
<td>98,023</td>
<td>14</td>
<td>7</td>
<td><p>0.61</p></td>
<td><p>2.66</p></td></tr>
</tbody>
</table></table-wrap><p id="S3.p2"><bold>News Category (HuffPost) Dataset:</bold> This is a computational linguistics benchmarking dataset, which consists of about 210,000 news articles from the Huffington Post (Misra, <xref rid="bib.bibx35" ref-type="bibr">2022</xref>). The dataset includes 14 topics, with labels originating from the section in which each article was published (i.e., applied by a human editor or journalist) (Misra, <xref rid="bib.bibx35" ref-type="bibr">2022</xref>). Because this dataset focuses on a single website and categorizes articles into relatively specific groups, it presents an opportunity to evaluate the ability of topic classification approaches to make granular distinctions.</p><p id="S3.p3"><bold>News Aggregator Dataset:</bold> This is a widely used machine learning benchmark dataset, maintained as part of a dataset repository at the University of California, Irvine (Gasparetti, <xref rid="bib.bibx18" ref-type="bibr">2017</xref>). It contains approximately 400,000 news articles from multiple websites across 4 topics (business, science and technology, health, and entertainment). The process used to generate the topic labels in this dataset is not specified in its documentation. These data complement the Huffington Post corpus well: while the topics here are much broader, they are applied across many websites, which is valuable for assessing the generalizability of topic classification methods.</p><p id="S3.p4"><bold>RecognaSumm:</bold> This dataset captures news articles in Portuguese, from Brazilian news outlets (Paiola et al., <xref rid="bib.bibx37" ref-type="bibr">2024</xref>). While the purpose of the dataset is to train machine learning models for text summarization, each record also carries one of 7 topic labels. These topic labels come from the metadata of each news website (Paiola et al., <xref rid="bib.bibx37" ref-type="bibr">2024</xref>). 
Given the bias toward English text in many transformer pre-training datasets (Gouvert et al., <xref rid="bib.bibx19" ref-type="bibr">2025</xref>), this dataset is useful for evaluating the effectiveness of topic classification approaches in Portuguese.</p><p id="S3.p5">To offer a comprehensive view of the many ways that article text and URL text might be represented, I trained models on a wide variety of input features, described in Section 3.1.</p><p id="S3.p6">For the HuffPost and RecognaSumm datasets, there are a large number of unique topics with ambiguous overlap (e.g., “WORLDPOST”, “THE WORLDPOST”, and “WORLD NEWS”). To account for this, I removed any records with labels that applied to fewer than 2% of total articles. The resulting number of records and topics is shown in Table <xref rid="S3.T1" ref-type="table">1</xref>.</p><p id="S3.p7">In addition, it is worth noting that the volume of URLs across domains in the RecognaSumm and News Aggregator datasets is not equal, as indicated by their Gini coefficients. This skew raises the possibility that certain domain names may appear disproportionately in the training dataset and may have imbalanced labels (e.g., <ext-link xlink:href="in.reuters.com" ext-link-type="uri">in.reuters.com</ext-link> accounts for 0.9% of records in the News Aggregator dataset, and 79.5% of its articles have the “business” topic label), potentially biasing results if the classification approaches learn from the domain–topic association. The text input variants described below, in particular the URL text conditions that remove domain names, are an effort to measure the amount of performance that might be attributed to these associations.</p><p id="S3.p8">I trained a separate instance of each model type for each dataset and input feature, for a total of 139 unique classifiers. I trained each model on 80% of records. 
I used 10% as a validation set, and held out the remaining 10% as a test set, from which I report performance metrics. Each model used the same splits.</p><sec id="S3.SS1">
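The two distributional statistics reported in Table 1 are standard measures; for readers who want to reproduce them, a minimal sketch follows (the counts in the example are toy values, not drawn from the datasets).

```python
import math

def gini(counts):
    """Gini coefficient of a count distribution, e.g., URLs per domain.
    0 means perfectly even; values near 1 mean heavily skewed."""
    values = sorted(counts)
    n, total = len(values), sum(values)
    # Standard formula via the sorted cumulative sum of the distribution.
    weighted = sum((i + 1) * v for i, v in enumerate(values))
    return (2 * weighted) / (n * total) - (n + 1) / n

def label_entropy(counts):
    """Shannon entropy (in bits) of a topic-label distribution."""
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c > 0)

# Toy example: four domains with uneven URL volume; four evenly used labels.
print(round(gini([100, 10, 5, 1]), 2), round(label_entropy([25, 25, 25, 25]), 2))
# -> 0.65 2.0
```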
        
        
        
        
      <title>Input Feature Conditions</title><p>To comprehensively evaluate the performance of article text versus URL text for topic classification, I trained models on seven distinct input feature conditions:
<bold>Article Text Conditions (baseline comparisons):</bold></p><p><list list-type="order" id="S3.I1"><list-item id="S3.I1.i1">
              
              
              
              
            <p id="S3.I1.i1.p1"><bold>Title only:</bold> Article headline text</p></list-item><list-item id="S3.I1.i2">
              
              
              
              
            <p id="S3.I1.i2.p1"><bold>Title and subtitle:</bold> Combined headline and subtitle text</p></list-item><list-item id="S3.I1.i3">
              
              
              
              
            <p id="S3.I1.i3.p1"><bold>Snippet/description:</bold> Excerpt from the article body (not available in the News Aggregator dataset)</p></list-item></list></p><p><bold>URL Text Conditions (primary focus):</bold></p><p><list list-type="order" id="S3.I2"><list-item id="S3.I2.i4">
              
              
              
              
            <p id="S3.I2.i4.p1"><bold>URL path (cleaned):</bold> URL path with query parameters removed and special characters replaced with spaces, in line with established practices in URL classification research (Baykan et al., <xref rid="bib.bibx3" ref-type="bibr">2011</xref>)</p></list-item><list-item id="S3.I2.i5">
              
              
              
              
            <p id="S3.I2.i5.p1"><bold>URL path (raw):</bold> URL path in its original form, excluding the domain</p></list-item><list-item id="S3.I2.i6">
              
              
              
              
            <p id="S3.I2.i6.p1"><bold>URL (raw):</bold> Complete URL including domain and path</p></list-item></list></p><p><bold>Blended Condition:</bold></p><p><list list-type="order" id="S3.I3"><list-item id="S3.I3.i7">
              
              
              
              
            <p id="S3.I3.i7.p1"><bold>URL + Title + Subtitle:</bold> Combination of full URL with article title and subtitle</p></list-item></list></p><p id="S3.SS1.p4">The “URL path (raw)” condition (#5) represents the primary contribution of this work, demonstrating that topic classification can be achieved using only the path portion of URLs without any article text, domain information, or text cleaning.</p></sec></sec>
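As a concrete illustration of conditions 4–6, the sketch below derives each URL variant with Python's standard library. The exact character-replacement rule for the cleaned condition is not fully specified beyond removing query parameters and replacing special characters with spaces, so the regular expression here is an assumption.

```python
import re
from urllib.parse import urlsplit

def url_variants(url):
    """Derive the three URL input conditions for a single article URL.
    The cleaning rule (non-alphanumerics -> spaces) is an assumed detail."""
    parts = urlsplit(url)
    raw_path = parts.path  # condition 5: path only, original form, no domain
    cleaned = re.sub(r"[^a-zA-Z0-9]+", " ", raw_path).strip()  # condition 4
    return {
        "url_raw": url,          # condition 6: complete URL, domain and path
        "path_raw": raw_path,
        "path_cleaned": cleaned,
    }

variants = url_variants("https://example.com/politics/election-results.html?ref=rss")
print(variants["path_cleaned"])  # -> politics election results html
```

Note that `urlsplit` drops the query string from `.path` automatically, so the query-parameter removal required by condition 4 falls out of the parse.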
    <sec id="S4">
      
      
      
      
    <title>Methods</title><sec id="S4.SS1">
        
        
        
        
      <title>Proposed URL-based Classification Approach</title><p id="S4.SS1.p1">In this work, I propose a news article topic classification pipeline that relies on a BERT-based transformer model, trained on a labeled dataset of URLs<xref rid="id1" ref-type="fn" specific-use="fn"><sup>1</sup></xref>. This approach builds on empirical work demonstrating both the effectiveness of BERT as a classifier for news text (Singh et al., <xref rid="bib.bibx41" ref-type="bibr">2020</xref>; Clercq et al., <xref rid="bib.bibx10" ref-type="bibr">2020</xref>) and the value of the text of news article URLs as input features for machine learning models (León et al., <xref rid="bib.bibx14" ref-type="bibr">2023</xref>).</p><p id="S4.SS1.p2">While I evaluate multiple URL encoding strategies to understand their relative merits, my primary contribution is demonstrating that news article classification can be achieved using only the raw URL path—the portion of the URL after the domain name—as input to a topic classification model. This “URL path (raw)” condition most directly tests whether the semantic information in URL paths alone can enable accurate topic classification.</p><p id="S4.SS1.p3">My analysis pipeline used DistilBERT, a compressed version of BERT that achieves nearly identical performance while using only 60% of the original parameters (Sanh et al., <xref rid="bib.bibx39" ref-type="bibr">2020</xref>). This compression makes the model more efficient, reducing computational resources needed for both fine-tuning and inference. I applied the standard DistilBERT to English-language datasets and DistilBERT-multilingual for other languages (Sanh et al., <xref rid="bib.bibx39" ref-type="bibr">2020</xref>).</p><p id="S4.SS1.p4">To fine-tune these models for topic classification, I used a training dataset of labeled URLs from the benchmarking corpora described above. 
I used HuggingFace’s Transformers library for fine-tuning (Wolf et al., <xref rid="bib.bibx47" ref-type="bibr">2020</xref>).</p></sec><sec id="S4.SS2">
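The “URL path (raw)” input condition can be made concrete with a short sketch. Assuming only Python’s standard library, the helper below (a hypothetical name, not taken from the paper’s replication package) isolates the path that serves as the classifier’s input; fine-tuning then treats this string as ordinary text for DistilBERT.

```python
from urllib.parse import urlparse

def url_to_path(url: str) -> str:
    """Return only the URL path (the portion after the domain),
    the input used by the "URL path (raw)" condition."""
    return urlparse(url).path

# Example news URL: the path alone carries the topic signal.
print(url_to_path("https://example.com/politics/article/election-results-2024"))
```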
        
        
        
        
      <title>Comparison Models</title><p id="S4.SS2.p1">I compared my topic classification algorithm against a range of other approaches. First, where possible, I implemented the distant labeling algorithm from León et al. (<xref rid="bib.bibx14" ref-type="bibr">2023</xref>). While both approaches classify news articles using URL information, they make different trade-offs. My approach uses a more complex model on a smaller dataset, whereas León et al. (<xref rid="bib.bibx14" ref-type="bibr">2023</xref>) employ a computationally efficient model supplemented by larger datasets and manual annotations. This comparison helps quantify how different approaches balance model complexity, dataset size, and prediction accuracy.</p><p id="S4.SS2.p2">In addition, I implemented a standard machine learning classification approach often used for text data: embedding text, then training an XGBoost classifier on those embeddings (e.g., Jahnavi et al., <xref rid="bib.bibx25" ref-type="bibr">2024</xref>). This approach, and related approaches with other kinds of tree-based ensemble models, encodes the text information from an article into a dense vector. The tree-based models often used for classification are robust to collinearity and capable of capturing non-linear relationships, making them well-suited to use dense vectors as input features (Breiman, <xref rid="bib.bibx4" ref-type="bibr">2001</xref>). To embed article text, I used the gte-Qwen2-1.5B-instruct embedding model (Li et al., <xref rid="bib.bibx32" ref-type="bibr">2023</xref>). At the time of writing, this is a relatively efficient model that performs well on the MTEB text embedding benchmark (Muennighoff et al., <xref rid="bib.bibx36" ref-type="bibr">2023</xref>). It is also capable of embedding text across multiple languages.</p><p>Finally, to provide a comprehensive set of baselines, I also implemented four additional traditional machine learning models. 
In contrast to the embedding-based XGBoost approach, these classifiers were trained on features generated using Term Frequency-Inverse Document Frequency (TF-IDF) vectorization, a standard technique for converting text into numerical features. The models include:</p><p><list list-type="bullet" id="S4.I1"><list-item id="S4.I1.i1">
              
              
              
            <p id="S4.I1.i1.p1"><bold>Logistic Regression:</bold> A widely used linear model that is a common baseline for text classification tasks.</p></list-item><list-item id="S4.I1.i2">
              
              
              
            <p id="S4.I1.i2.p1"><bold>Support Vector Machine (SVM):</bold> A linear SVM classifier, which is often highly effective for high-dimensional, sparse data like TF-IDF vectors.</p></list-item><list-item id="S4.I1.i3">
              
              
              
            <p id="S4.I1.i3.p1"><bold>Gradient Boosting:</bold> A tree-based ensemble technique that builds models sequentially, with each new model correcting the errors of the previous ones.</p></list-item><list-item id="S4.I1.i4">
              
              
              
            <p id="S4.I1.i4.p1"><bold>Tree Ensemble:</bold> An ensemble model using a random forest classifier, which constructs many decision trees during training.</p></list-item></list></p><p id="S4.SS2.p4">These models represent a range of established and effective techniques for text classification and serve as robust comparisons for the URL path-based DistilBERT approach. The parameters used to train each model are included in the replication package.</p></sec><sec id="S4.SS3">
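As an illustration of the TF-IDF baselines listed above, the sketch below pairs scikit-learn’s TfidfVectorizer with a logistic regression classifier. The URL paths, labels, and pipeline settings are invented for illustration, not the paper’s tuned configuration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy URL paths with invented topic labels, for illustration only.
paths = [
    "/politics/senate-vote-budget",
    "/politics/election-debate-recap",
    "/sports/world-cup-final-score",
    "/sports/league-standings-update",
]
labels = ["politics", "politics", "sports", "sports"]

# TF-IDF turns each path into a sparse vector of word weights;
# a linear model is a standard baseline on such features.
clf = make_pipeline(
    TfidfVectorizer(token_pattern=r"[a-z]+"),
    LogisticRegression(max_iter=1000),
)
clf.fit(paths, labels)
print(clf.predict(["/politics/new-bill-announcement"]))
```

Any of the other three baselines can be swapped in for the final pipeline step without changing the vectorization.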
        
        
        
        
      <title>Section-to-Topic Mapping Procedure</title><p id="S4.SS3.p1">To reproduce the distant-labeling approach of León et al. (<xref rid="bib.bibx14" ref-type="bibr">2023</xref>), I created a site section to topic lookup table for the News Aggregator and RecognaSumm datasets described above (the HuffPost dataset URLs lack sections). For both datasets, the site section corresponded to the first element of the URL path. I extracted these elements, ranked them by frequency, then retained the top 250. This provided enough data to cover the long-tail of possible site sections while keeping the mapping task tractable.</p><p id="S4.SS3.p2">Then, I manually assigned each section token to one of the dataset’s high-level topics by assessing the closest fit, based on the section name and a sample of its URLs. The resulting mapping tables covered 16.8% of all URLs in the News Aggregator dataset and 61.1% in RecognaSumm. This discrepancy stems from a difference in section specificity—while common RecognaSumm sections mapped cleanly onto high-level topics (e.g., politica, economia), the most common in News Aggregator were more generic (e.g., news, article, story, content). Unmapped URLs received no pseudo-label and were therefore excluded from distant-labeler training, mirroring the procedure in León et al. (<xref rid="bib.bibx14" ref-type="bibr">2023</xref>). The mapping tables and generation code are included in the replication package.</p></sec><sec id="S4.SS4">
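The first steps of this procedure, taking the leading path element as the site section and ranking sections by frequency, can be sketched with the standard library (the example URLs are invented; 250 is the cutoff used in the text):

```python
from collections import Counter
from urllib.parse import urlparse

def first_segment(url: str) -> str:
    """Return the first element of the URL path, used as the site section."""
    parts = [p for p in urlparse(url).path.split("/") if p]
    return parts[0] if parts else ""

urls = [
    "https://example.com/politica/reforma-tributaria",
    "https://example.com/politica/eleicoes-2024",
    "https://example.com/economia/inflacao-mensal",
]

# Rank sections by frequency and keep the most common (top 250 in the paper).
sections = Counter(first_segment(u) for u in urls)
top = [s for s, _ in sections.most_common(250)]
print(top)
```

The retained sections would then be manually mapped to the dataset’s high-level topic labels.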
        
        
        
        
      <title>Evaluation Framework</title><p id="S4.SS4.p1">To comprehensively assess the proposed URL-based classification approach, I leveraged a multi-faceted evaluation framework examining predictive performance, computational efficiency, and data requirements.</p><sec id="S4.SS4.SSS1">
          
          
          
          
        <title>Performance Metrics</title><p id="S4.SS4.SSS1.p1">I evaluated all models using F1 scores to account for potential class imbalances in the datasets. For each model, I computed both overall F1 scores and per-topic F1 scores to identify performance variations across different topics.</p><p id="S4.SS4.SSS1.p2">To investigate potential biases from unequal domain representation in the training data, I also conducted domain-level analyses. For each domain in the test set, I calculated model F1 scores and examined their relationship to domain frequency using Pearson correlation. To test whether the model was memorizing domain-topic associations, I computed the accuracy lift over a naive majority-topic baseline (i.e., the accuracy rate that would be achieved by always predicting each domain’s modal topic) for each domain and correlated this with the domain’s topic entropy.</p></sec><sec id="S4.SS4.SSS2">
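The two domain-level quantities (accuracy lift over a majority-topic baseline, and topic entropy) can be sketched in pure Python; the toy topic labels below are invented for illustration.

```python
import math
from collections import Counter

def majority_baseline_accuracy(true_topics):
    """Accuracy from always predicting the domain's modal topic."""
    counts = Counter(true_topics)
    return counts.most_common(1)[0][1] / len(true_topics)

def topic_entropy(true_topics):
    """Shannon entropy (bits) of a domain's topic distribution."""
    counts = Counter(true_topics)
    n = len(true_topics)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def accuracy_lift(true_topics, predicted_topics):
    """Model accuracy minus the naive majority-topic baseline."""
    correct = sum(t == p for t, p in zip(true_topics, predicted_topics))
    return correct / len(true_topics) - majority_baseline_accuracy(true_topics)

# Invented example: one domain with a mixed topic distribution.
truth = ["politics", "politics", "sports", "health"]
preds = ["politics", "politics", "sports", "sports"]
print(accuracy_lift(truth, preds), topic_entropy(truth))
```

A positive correlation between these two quantities across domains would indicate the model gains most where the domain’s modal topic is least predictive.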
          
          
          
          
        <title>Computational Efficiency Evaluation</title><p id="S4.SS4.SSS2.p1">Given the practical compute constraints researchers face (León et al., <xref rid="bib.bibx14" ref-type="bibr">2023</xref>), I measured the computational efficiency of each approach through throughput analysis. I defined throughput as predictions per second on the test set, providing a standardized measure of inference speed across models. All models were evaluated on identical hardware: a server equipped with 8 Intel Xeon 6338 CPU cores and an NVIDIA A100 GPU. For each model type, I used the input features that achieved the highest F1 score on the test set, ensuring I compared the models at their optimal configurations.</p></sec><sec id="S4.SS4.SSS3">
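Throughput as defined here is simply predictions per second over a fixed batch; a minimal timing sketch, with a toy predictor standing in for a trained model:

```python
import time

def throughput(predict_fn, inputs):
    """Predictions per second for one pass over a batch of inputs."""
    start = time.perf_counter()
    predict_fn(inputs)
    elapsed = time.perf_counter() - start
    return len(inputs) / elapsed

# Toy predictor standing in for a trained classifier.
rate = throughput(lambda xs: [len(x) % 2 for x in xs], ["/a/b"] * 10000)
print(rate > 0)
```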
          
          
          
          
        <title>Training Data Requirements Analysis</title><p id="S4.SS4.SSS3.p1">To understand the relationship between training data volume and classification performance, I conducted systematic ablation studies. I created stratified samples of 1,000 and 3,000 training records from each dataset, maintaining the original distribution of topic labels. These sample sizes were chosen to represent scenarios with limited annotation resources—depending on the dataset, they comprised between 1% and 4% of the full training data.</p><p id="S4.SS4.SSS3.p2">For each sample size, I trained new DistilBERT classifiers using identical hyperparameters to the full models, focusing on raw URL path as the input feature. This allowed me to identify the minimum data requirements for achieving competitive performance relative to both the full DistilBERT models and the traditional machine learning baselines.</p></sec></sec><sec id="S4.SS5">
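A stratified sample that preserves the topic distribution can be sketched as follows; the record names and the proportional-allocation rule are illustrative assumptions (scikit-learn’s train_test_split with its stratify argument is a common alternative).

```python
import random
from collections import defaultdict

def stratified_sample(records, labels, k, seed=0):
    """Draw roughly k records while preserving the label distribution."""
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for rec, lab in zip(records, labels):
        by_label[lab].append(rec)
    sample = []
    n = len(records)
    for lab, recs in by_label.items():
        # Allocate slots proportionally to this label's share of the data.
        share = max(1, round(k * len(recs) / n))
        sample.extend(rng.sample(recs, min(share, len(recs))))
    return sample

# Invented data: 80/20 label split, sampled down to 100 records.
records = [f"url_{i}" for i in range(1000)]
labels = ["a"] * 800 + ["b"] * 200
sub = stratified_sample(records, labels, k=100)
print(len(sub))
```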
        
        
        
        
      <title>URL Structure Analysis</title><p id="S4.SS5.p1">To investigate whether the effectiveness of URL-based classification depends on specific URL formats or generalizes across diverse structures, I conducted a systematic analysis of URL composition and its relationship to model performance.</p><sec id="S4.SS5.SSS1">
          
          
          
          
        <title>URL Feature Extraction</title><p id="S4.SS5.SSS1.p1">I programmatically analyzed all URLs in the News Aggregator dataset (<inline-formula><mml:math id="S4.SS5.SSS1.p1.m1" alttext="n=421,890" display="inline"><mml:mrow><mml:mi mathcolor="#000000">n</mml:mi><mml:mo mathcolor="#000000">=</mml:mo><mml:mrow><mml:mn mathcolor="#000000">421</mml:mn><mml:mo mathcolor="#000000">,</mml:mo><mml:mn mathcolor="#000000">890</mml:mn></mml:mrow></mml:mrow></mml:math></inline-formula>), extracting and parsing each URL path to identify key structural features. For each URL, I computed:</p><p><list list-type="bullet" id="S4.I2"><list-item id="S4.I2.i1">
                
                
                
              <p id="S4.I2.i1.p1"><bold>Path length:</bold> Total character count in the URL path</p></list-item><list-item id="S4.I2.i2">
                
                
                
              <p id="S4.I2.i2.p1"><bold>Segment count:</bold> Number of path components separated by forward slashes</p></list-item><list-item id="S4.I2.i3">
                
                
                
              <p id="S4.I2.i3.p1"><bold>Numeric density:</bold> Percentage of numeric characters in the path</p></list-item><list-item id="S4.I2.i4">
                
                
                
              <p id="S4.I2.i4.p1"><bold>Date indicators:</bold> Presence of recognizable date patterns (e.g., YYYY/MM/DD, YYYY-MM-DD)</p></list-item><list-item id="S4.I2.i5">
                
                
                
              <p id="S4.I2.i5.p1"><bold>Semantic keywords:</bold> Presence of topic-indicative terms (e.g., “politics,” “business,” “health”) or structural markers (e.g., “article,” “story,” “news”)</p></list-item></list></p></sec><sec id="S4.SS5.SSS2">
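The five features above can be sketched with the standard library. The keyword lists and date regex below are assumptions approximating the patterns described in the text; the paper’s exact rules are in its replication package.

```python
import re
from urllib.parse import urlparse

# Hypothetical keyword lists approximating those described in the text.
TOPIC_KEYWORDS = {"politics", "business", "health", "sports", "entertainment"}
STRUCTURAL_MARKERS = {"article", "story", "news"}
# Matches YYYY/MM/DD or YYYY-MM-DD style date patterns.
DATE_PATTERN = re.compile(r"\d{4}[/-]\d{1,2}[/-]\d{1,2}")

def url_features(url: str) -> dict:
    """Compute the five structural features listed above for one URL."""
    path = urlparse(url).path
    segments = [s for s in path.split("/") if s]
    tokens = set(re.findall(r"[a-z]+", path.lower()))
    return {
        "path_length": len(path),                    # total characters in the path
        "segment_count": len(segments),              # slash-separated components
        "numeric_density": (                         # share of numeric characters
            sum(c.isdigit() for c in path) / len(path) if path else 0.0
        ),
        "has_date": bool(DATE_PATTERN.search(path)),
        "has_topic_keyword": bool(tokens & TOPIC_KEYWORDS),
        "has_structural_marker": bool(tokens & STRUCTURAL_MARKERS),
    }

print(url_features("https://example.com/politics/2024/03/15/budget-vote"))
```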
          
          
          
          
        <title>URL Categorization Schema</title><p id="S4.SS5.SSS2.p1">Based on these extracted features, I developed a classification schema to categorize each URL into one of six mutually exclusive structural types:</p><p><list list-type="bullet" id="S4.I3"><list-item id="S4.I3.i1">
                
                
                
              <p id="S4.I3.i1.p1"><bold>Semantic Structured:</bold> URLs containing topic keywords within a clear hierarchical structure (e.g., /politics/article/biden-announcement-2024)</p></list-item><list-item id="S4.I3.i2">
                
                
                
              <p id="S4.I3.i2.p1"><bold>Semantic Unstructured:</bold> URLs with topic keywords but lacking clear hierarchical organization (e.g., /biden-politics-announcement.html)</p></list-item><list-item id="S4.I3.i3">
                
                
                
              <p id="S4.I3.i3.p1"><bold>Date-Based:</bold> URLs where the primary organizational structure uses date formatting (e.g., /2024/03/15/)</p></list-item><list-item id="S4.I3.i4">
                
                
                
              <p id="S4.I3.i4.p1"><bold>ID-Based:</bold> URLs composed primarily (<inline-formula><mml:math id="S4.I3.i4.p1.m1" alttext="&gt;50\%" display="inline"><mml:mrow><mml:mi /><mml:mo mathcolor="#000000">&gt;</mml:mo><mml:mrow><mml:mn mathcolor="#000000">50</mml:mn><mml:mo mathcolor="#000000">%</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula>) of numeric or alphanumeric identifiers without semantic keywords (e.g., /article/837g4qw9f2)</p></list-item><list-item id="S4.I3.i5">
                
                
                
              <p id="S4.I3.i5.p1"><bold>Simple:</bold> Short URLs containing two or fewer path segments</p></list-item><list-item id="S4.I3.i6">
                
                
                
              <p id="S4.I3.i6.p1"><bold>Mixed:</bold> URLs not fitting cleanly into the above categories, typically combining multiple organizational schemes</p></list-item></list></p></sec><sec id="S4.SS5.SSS3">
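A rule-based assignment to the six types above can be sketched as follows. The check order, keyword list, and the character-share proxy for “composed primarily of identifiers” are illustrative assumptions, not the paper’s exact rules.

```python
import re
from urllib.parse import urlparse

# Hypothetical topic keywords; the paper's full list is in its replication package.
TOPIC_KEYWORDS = {"politics", "business", "health", "sports", "entertainment"}
DATE_PATTERN = re.compile(r"\d{4}[/-]\d{1,2}[/-]\d{1,2}")

def categorize_url(url: str) -> str:
    """Assign a URL to one of the six structural types.
    Check order and thresholds are illustrative assumptions."""
    path = urlparse(url).path
    segments = [s for s in path.split("/") if s]
    tokens = set(re.findall(r"[a-z]+", path.lower()))
    has_keyword = bool(tokens & TOPIC_KEYWORDS)
    # Share of segment characters in digit-bearing (ID-like) segments.
    seg_chars = sum(len(s) for s in segments)
    id_chars = sum(len(s) for s in segments if any(c.isdigit() for c in s))
    id_share = id_chars / seg_chars if seg_chars else 0.0

    if has_keyword:
        # Hierarchical if a keyword occupies its own leading segment.
        if segments and segments[0] in TOPIC_KEYWORDS:
            return "semantic_structured"
        return "semantic_unstructured"
    if DATE_PATTERN.search(path):
        return "date_based"
    if id_share > 0.5:
        return "id_based"
    if len(segments) <= 2:
        return "simple"
    return "mixed"

print(categorize_url("https://example.com/politics/article/budget-vote"))
```

Checking semantic types before date and ID types keeps the categories mutually exclusive, with “mixed” as the fallback.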
          
          
          
          
        <title>Performance Analysis by Structure Type</title><p id="S4.SS5.SSS3.p1">To assess how URL structure impacts classification accuracy, I merged the structural categorizations with the domain-level performance results from the raw URL path DistilBERT classifier. This allowed me to calculate aggregate F1 scores for each URL type and examine correlations between specific structural characteristics (e.g., path length, keyword presence) and model performance. I used Pearson correlation to quantify the relationship between continuous structural features and F1 scores.</p></sec></sec></sec>
    <sec id="S5">
      
      
      
      
    <title>Results</title><p id="S5.p1">Table <xref rid="S5.T2" ref-type="table">2</xref> presents F1 scores across all seven input conditions described in Section 3.1. The URL path (raw) condition represents the core contribution—classification using only URL paths—while other conditions provide comparative baselines (article text), alternative URL-based approaches, and upper bounds (blended features). These metrics demonstrate two key high-level findings. First, the fine-tuned DistilBERT classifier achieves the highest topic classification performance for every dataset. This holds true for every input feature, emphasizing the general utility of DistilBERT as a classifier, regardless of available text. Second, URLs offer strong topic classification performance—in all cases, the raw URL path-based DistilBERT classifier outperforms every non-DistilBERT model, confirming H1. In addition, the URL-based DistilBERT models trail their highest-performing counterparts by an average F1 of only 0.04 across datasets. Those highest-performing models also benefit from the inclusion of URL text. When comparing DistilBERT models using only titles and subtitles with those that also leverage URLs, the latter gain an average F1 of 0.04 across datasets.</p><p id="S5.p2">The results from the URL-based models also help to mitigate concerns around domain-level bias. While it was possible that models would glean associations between domain names themselves and their most common topic labels, in practice, there is virtually no change in performance when domain names are added. This indicates that the most predictive text lies in the URL paths, gleaned from section and article information.</p><p id="S5.p3">Looking at the other models, there is not a consistent second-best approach. However, the logistic regression, XGBoost, and SVM approaches all offer comparable performance across datasets. The distant labeling algorithm consistently has the lowest performance. 
In particular, the distant labeling pipeline struggles with the RecognaSumm dataset, with a maximum F1 score of just 0.25. A closer look at the section-to-label mapping suggests that this performance drop stems from an ambiguous relationship between site sections and topics. For example, “Mundo” is one of the most common topics in the dataset, containing 11,000 URLs. There is a corresponding “mundo” site section, assigned to this label in the manual mapping step. However, only 15% of URLs with this section are actually labeled with the corresponding category—rather, many articles fall under topics including “Internacional” (71%), “Saúde” (9%), and “Ciência e Tecnologia” (0.1%). Because there is not a clear one-to-one mapping in this case, the manual labeling process works against accurate topic classification—as evidenced by the much higher performance on this dataset from other traditional machine learning methods without a manual labeling step.</p><p id="S5.p4">It’s worth noting that the section attribution step, which is key to the distant labeling algorithm, could not be applied to the HuffPost dataset. The URLs in this dataset do not have section labels, so the reported performance solely relies on the count vectorization and Naive Bayes classification steps of the algorithm. In addition, the article snippets in the HuffPost dataset were too long for the context window of the embedding model used with the XGBoost classifier, so no result is reported for that condition.</p><table-wrap id="S5.T2"><label>Table 2:</label><caption><title>F1 scores for all combinations of models and input features across datasets. The URL Path (raw)* column represents the primary contribution of this work (URL-only classification). Bold values indicate the highest-performing model for each feature–dataset combination.</title></caption>
        
        
        
        
      </table-wrap><sec id="S5.SS1">
        
        
        
        
      <title>Throughput</title><p id="S5.SS1.p1">Figure <xref rid="S5.F1" ref-type="fig">1</xref> shows the throughput of each approach (higher is better). Several of the traditional machine learning models—logistic regression, support vector machines, and tree ensembles—offer the highest throughput, processing between 20,000 and 50,000 records per second. These models therefore offer a clear option for cases where researchers cannot sacrifice efficiency for predictive performance, or do not have the computational resources required to run a more demanding model.</p><p id="S5.SS1.p2">DistilBERT, on the other hand, sacrifices throughput for the strongest performance, processing 200-600 records per second. Because it generates compute-intensive embeddings before running inference, the XGBoost approach has the lowest throughput.</p><fig id="S5.F1"><label>Figure 1:</label><caption><title>Prediction throughput (predictions per second) for each model. The logistic regression, SVM, and tree ensemble approaches offer the highest throughput, with comparable predictive performance, while DistilBERT trades off throughput for leading performance.</title></caption>
          
          
          
          
        <graphic xlink:href="throughput.png" /></fig></sec><sec id="S5.SS2">
        
        
        
        
      <title>Training Data Volume Ablation</title><p id="S5.SS2.p1">Figure <xref rid="S5.F2" ref-type="fig">2</xref> shows how training data volume affects DistilBERT classification performance. The impact varied substantially across datasets. The HuffPost dataset showed the steepest performance degradation: F1 scores dropped from 0.80 with full training data to 0.60 with 3,000 samples and 0.18 with 1,000 samples. This sensitivity likely stems from the dataset’s 14 topic classes requiring more examples per category.</p><fig id="S5.F2"><label>Figure 2:</label><caption><title>Effect of training data ablation on URL-based DistilBERT classifier performance. Performance takes the largest hit for the dataset with the most topic labels.</title></caption>
          
          
          
          
        <graphic xlink:href="data_ablation.png" /></fig><p id="S5.SS2.p2">The News Aggregator and RecognaSumm datasets proved more robust to reduced training data. With 3,000 training samples, F1 scores decreased by an average of only 0.09 points. Even with just 1,000 samples, these models maintained F1 scores within 0.20 points of their full-data counterparts. Notably, the 3,000-sample models achieved performance comparable to traditional machine learning approaches trained on complete datasets, suggesting DistilBERT can be viable even with limited labeled data.</p></sec><sec id="S5.SS3">
        
        
        
        
      <title>Topic and Domain Performance Variance</title><p id="S5.SS3.p1">Table <xref rid="A1.T3" ref-type="table">A1</xref> in Appendix A reveals substantial heterogeneity in topic-level classification performance. The HuffPost dataset exhibited the widest performance range, with F1 scores spanning from 0.65 (Black Voices) to 0.89 (Politics) for the raw URL path model. Political content and lifestyle topics (Style &amp; Beauty: 0.87, Wellness: 0.87) showed the strongest URL-based discriminability. Identity-focused categories (Black Voices: 0.65, Queer Voices: 0.74) proved more challenging, suggesting these topics may appear across more diverse URL structures.</p><p id="S5.SS3.p2">In contrast, the News Aggregator and RecognaSumm datasets showed remarkably consistent performance across topics, with F1 ranges of only 0.06 points each.</p><p id="S5.SS3.p3">Domain-level analysis revealed no meaningful relationship between domain representation and performance (Pearson <inline-formula><mml:math id="S5.SS3.p3.m1" alttext="r=0.02" display="inline"><mml:mrow><mml:mi mathcolor="#000000">r</mml:mi><mml:mo mathcolor="#000000">=</mml:mo><mml:mn mathcolor="#000000">0.02</mml:mn></mml:mrow></mml:math></inline-formula>), indicating that aggregate results were not inflated by a few heavily represented sources. The accuracy lift analysis showed that 35.1% of domains gained at least 5 percentage points over the naive baseline, while only 8.9% experienced a comparable drop. Domains with higher topic diversity showed greater model improvement (<inline-formula><mml:math id="S5.SS3.p3.m2" alttext="r=0.63" display="inline"><mml:mrow><mml:mi mathcolor="#000000">r</mml:mi><mml:mo mathcolor="#000000">=</mml:mo><mml:mn mathcolor="#000000">0.63</mml:mn></mml:mrow></mml:math></inline-formula> between topic entropy and accuracy lift), confirming the classifier leverages URL content rather than memorizing domain-topic associations.</p></sec><sec id="S5.SS4">
        
        
        
        
      <title>URL Structure</title><p id="S5.SS4.p1">The URL structure analysis revealed both diversity and consistent patterns across the News Aggregator dataset. Over half (53.6%) of URLs contained explicit semantic keywords, with “semantic unstructured” (27.6%) and “semantic structured” (25.9%) being the most common types. Pure ID-based URLs were rare, comprising only 1.1% of the dataset.</p><p id="S5.SS4.p2">Classification performance varied meaningfully by URL type. Date-based URLs achieved the highest F1 score (0.93), followed by semantic structured (0.87) and semantic unstructured (0.84) URLs. The model could not accurately classify ID-based URLs (<inline-formula><mml:math id="S5.SS4.p2.m1" alttext="F1=0.41" display="inline"><mml:mrow><mml:mrow><mml:mi mathcolor="#000000">F</mml:mi><mml:mo>⁢</mml:mo><mml:mn mathcolor="#000000">1</mml:mn></mml:mrow><mml:mo mathcolor="#000000">=</mml:mo><mml:mn mathcolor="#000000">0.41</mml:mn></mml:mrow></mml:math></inline-formula>). URL path length showed a moderate positive correlation with performance (<inline-formula><mml:math id="S5.SS4.p2.m2" alttext="r=0.32" display="inline"><mml:mrow><mml:mi mathcolor="#000000">r</mml:mi><mml:mo mathcolor="#000000">=</mml:mo><mml:mn mathcolor="#000000">0.32</mml:mn></mml:mrow></mml:math></inline-formula>), suggesting longer, more descriptive paths provide richer classification signals.
To verify that the strong performance of date-based URLs stems from semantic content rather than spurious temporal patterns, I conducted an ablation study removing all date components from URL paths. Across all three datasets, logistic regression models showed minimal performance change when dates were removed, with an average F1 decrease of only 0.008 (see Appendix B).
These findings demonstrate that while URL structures vary considerably, the vast majority contain meaningful semantic information that the classifier successfully exploits. Performance degradation occurs primarily in the small fraction of opaque, ID-based URLs, validating the general applicability of URL-based classification for news articles when semantically meaningful text is present.</p></sec></sec>
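The date-removal ablation amounts to a regex pass over URL paths. The pattern below is an assumption covering the date formats named in the feature extraction (YYYY/MM/DD, YYYY-MM-DD, and standalone years); the paper’s exact patterns are in its replication package.

```python
import re

# Assumed date forms: full dates plus standalone 4-digit years.
# (A 4-digit article ID would also match the second alternative.)
DATE_RE = re.compile(r"\b\d{4}[/-]\d{1,2}[/-]\d{1,2}\b|\b\d{4}\b")

def strip_dates(path: str) -> str:
    """Remove date components, then collapse leftover duplicate slashes."""
    stripped = DATE_RE.sub("", path)
    return re.sub(r"/{2,}", "/", stripped)

print(strip_dates("/politics/2024/03/15/budget-vote"))
```

Retraining on the stripped paths and comparing F1 scores isolates the contribution of date tokens to classification performance.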
    <sec id="S6">
      
      
      
      
    <title>Discussion</title><p id="S6.p1">This research demonstrates the potential for URL-based topic classification of news articles. Across three commonly-used benchmark datasets, a BERT-based classifier trained solely on the paths of article URLs more accurately assigns news articles to topics than other machine learning approaches that leverage vectorization of article text. In addition, URLs offer a performance gain for BERT-based topic classification compared to article text, whether in isolation or in combination with that text. This suggests that the extremely compact text contained in the path of a news article URL may offer a stronger signal than the text of the article itself for topic classification tasks.</p><p id="S6.p2">These findings highlight a valuable approach for news article topic classification: URLs alone, when analyzed with sophisticated models, can provide highly accurate classification while requiring minimal data. This is particularly relevant given emerging challenges in accessing news content. While URL-based classification has long been necessary in domains where accessing web pages is impossible (Baykan et al., <xref rid="bib.bibx3" ref-type="bibr">2011</xref>; Ma et al., <xref rid="bib.bibx34" ref-type="bibr">2009</xref>), news researchers now face similar constraints. Recent increases in robots.txt restrictions, driven by publishers’ responses to AI scraping, combined with the growing adoption of paywalls, are making it increasingly difficult to collect large-scale news text data (Longpre et al., <xref rid="bib.bibx33" ref-type="bibr">2024</xref>). My results demonstrate that effective classification remains possible in these cases, requiring only article URLs rather than article text access.</p><p id="S6.p3">A key consideration for this supervised learning approach is the need for labeled training data, which researchers can procure in several ways. 
First, the data ablation study showed that competitive performance is possible with just a few thousand training examples, making manual annotation of URLs or articles a feasible option. Second, because the model’s performance is not dependent on any particular domain within a diverse dataset, a model trained on a wide-ranging corpus may generalize to new data if the URL structures are comparable. Finally, for cases where manual annotation is impractical, researchers could explore emerging techniques that use large language models (LLMs) for automated labeling (Zhang et al., <xref rid="bib.bibx49" ref-type="bibr">2025</xref>; Kuzman &amp; Ljubešić, <xref rid="bib.bibx31" ref-type="bibr">2025</xref>).</p><p id="S6.p4">This research also illustrates an important methodological trade-off in computational communication research: data efficiency versus compute efficiency. Many existing approaches prioritize compute efficiency. For instance, León et al. (<xref rid="bib.bibx14" ref-type="bibr">2023</xref>) deliberately designed their distant labeling approach to be computationally lightweight, making it accessible to researchers with limited computational resources. However, there are increasingly common scenarios where data—rather than compute—is the limiting factor. Researchers may face restricted access to news content, limited availability of digitized historical records, or technical challenges in parsing certain types of information at scale. In these cases, traditional approaches requiring large training datasets may be impractical. My findings demonstrate an alternative: Using more sophisticated model architectures, especially those pretrained on large text corpora, can achieve strong performance even with limited domain-specific data. 
This approach enables researchers to maximize the value of scarce data, making previously infeasible large-scale annotation and analysis tasks possible.</p><p id="S6.p5">These findings point to an important direction for computational communication research: exploring the balance between data and compute efficiency. While leading general-purpose models like the GPT series demand both massive datasets and intensive computing resources (Brown et al., <xref rid="bib.bibx5" ref-type="bibr">2020</xref>), more focused applications may offer practical alternatives. Sophisticated models applied to specific research tasks can achieve strong performance with smaller datasets and less manual annotation. This data-efficient approach complements existing work on compute efficiency, with each strategy addressing different barriers to research. Just as compute-efficient methods make computational analysis more accessible to researchers with limited processing power, data-efficient approaches can enable research that would otherwise be blocked by the costs of web scraping, data licensing, or human annotation. Together, these approaches can democratize computational methods across a broader range of research contexts.</p><p id="S6.p6">There are several important limitations to this work. First, the methodology described here requires meaningful information encoded in the URLs used for training. As demonstrated in the URL structure analysis in Section 5.4, longer URLs with more semantic information generally yield better results. This method does not perform well in cases where, as on some news websites, the URL path is a random sequence of letters and numbers. Researchers should therefore consider whether the URLs in their data encode meaningful information to map to topic labels before using this method. In addition, while the benchmark datasets used to evaluate the approaches in this study contain a wide range of news websites and multiple languages, they are not comprehensive. 
The results here may not transfer to any particular news website, label set, or language. Finally, the model used for URL-based topic classification here is based on BERT, which was trained on a corpus of books and Wikipedia data (Devlin et al., <xref rid="bib.bibx15" ref-type="bibr">2019</xref>). While fine-tuning pretrained models can yield strong performance on specific classification tasks, the model’s initial pretraining fundamentally shapes its behavior. When research priorities include model interpretability or precise control over model behavior, training custom models from scratch may be more appropriate.</p><p id="S6.p7">Looking ahead, this work opens several promising avenues for future research in computational communication studies. The demonstrated effectiveness of URL-based topic classification suggests opportunities to analyze news content in contexts where article text access is limited or impossible. Researchers might explore applying these methods to historical news archives where only URLs or headlines survived, or to contemporary paywalled content that cannot be scraped at scale. Future work should also investigate how URL-based topic classification performs across different languages, cultures, and digital media contexts, particularly as URL conventions evolve. More broadly, this research highlights the need to critically examine assumptions about what constitutes “rich” versus “sparse” data for machine learning applications in communication research. While the field has often prioritized collecting comprehensive text corpora, there may be many cases where strategic use of limited but information-dense features, combined with sophisticated modeling approaches, could yield comparable or superior results with fewer resources. Understanding these tradeoffs between data requirements, computational costs, and model performance will be crucial as computational methods continue to expand within communication research.</p></sec>
    </body>
  <back>
    <fn-group><title>Notes</title><fn id="id1" symbol="1"><p id="footnote1">All code and data available at <ext-link xlink:href="https://osf.io/qfrzh/overview" ext-link-type="uri">https://osf.io/qfrzh/overview</ext-link></p></fn></fn-group><ref-list><title>References</title>
      <ref id="bib.bibx1"><mixed-citation publication-type="journal"><string-name><surname>Bakshy</surname>, <given-names>E.</given-names></string-name>, <string-name><surname>Messing</surname>, <given-names>S.</given-names></string-name>, &amp; <string-name><surname>Adamic</surname>, <given-names>L. A.</given-names></string-name> (<year>2015</year>). <article-title>Exposure to ideologically diverse news and opinion on Facebook</article-title>. <source><italic>Science</italic></source>, <volume>348</volume>(<issue>6239</issue>), <fpage>1130</fpage>–<lpage>1132</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1126/science.aaa1160">https://doi.org/10.1126/science.aaa1160</ext-link></mixed-citation></ref>
      <ref id="bib.bibx2"><mixed-citation publication-type="journal"><string-name><surname>Baykan</surname>, <given-names>E.</given-names></string-name>, <string-name><surname>Henzinger</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Marian</surname>, <given-names>L.</given-names></string-name>, &amp; <string-name><surname>Weber</surname>, <given-names>I.</given-names></string-name> (<year>2009</year>). <article-title>Purely URL-based topic classification</article-title>. <source><italic>Proceedings of the 18th international conference on World wide web</italic></source>, <fpage>1109</fpage>–<lpage>1110</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1145/1526709.1526880">https://doi.org/10.1145/1526709.1526880</ext-link></mixed-citation></ref>
      <ref id="bib.bibx3"><mixed-citation publication-type="journal"><string-name><surname>Baykan</surname>, <given-names>E.</given-names></string-name>, <string-name><surname>Henzinger</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Marian</surname>, <given-names>L.</given-names></string-name>, &amp; <string-name><surname>Weber</surname>, <given-names>I.</given-names></string-name> (<year>2011</year>). <article-title>A Comprehensive Study of Features and Algorithms for URL-Based Topic Classification</article-title>. <source><italic>ACM Transactions on the Web</italic></source>, <volume>5</volume>(<issue>3</issue>), <fpage>1</fpage>–<lpage>29</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1145/1993053.1993057">https://doi.org/10.1145/1993053.1993057</ext-link></mixed-citation></ref>
      <ref id="bib.bibx4"><mixed-citation publication-type="journal"><string-name><surname>Breiman</surname>, <given-names>L.</given-names></string-name> (<year>2001</year>). <article-title>Random Forests</article-title>. <source><italic>Machine Learning</italic></source>, <volume>45</volume>, <fpage>5</fpage>–<lpage>32</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1023/A:1010933404324">https://doi.org/10.1023/A:1010933404324</ext-link></mixed-citation></ref>
      <ref id="bib.bibx5"><mixed-citation publication-type="book"><string-name><surname>Brown</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Mann</surname>, <given-names>B.</given-names></string-name>, <string-name><surname>Ryder</surname>, <given-names>N.</given-names></string-name>, <string-name><surname>Subbiah</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Kaplan</surname>, <given-names>J. D.</given-names></string-name>, <string-name><surname>Dhariwal</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Neelakantan</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Shyam</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Sastry</surname>, <given-names>G.</given-names></string-name>, <string-name><surname>Askell</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Agarwal</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Herbert-Voss</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Krueger</surname>, <given-names>G.</given-names></string-name>, <string-name><surname>Henighan</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Child</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Ramesh</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Ziegler</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>Wu</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Winter</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Hesse</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Chen</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Sigler</surname>, <given-names>E.</given-names></string-name>, <string-name><surname>Litwin</surname>, 
<given-names>M.</given-names></string-name>, <string-name><surname>Gray</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Chess</surname>, <given-names>B.</given-names></string-name>, <string-name><surname>Clark</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Berner</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>McCandlish</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Radford</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Sutskever</surname>, <given-names>I.</given-names></string-name>, &amp; <string-name><surname>Amodei</surname>, <given-names>D.</given-names></string-name> (<year>2020</year>). <chapter-title>Language Models are Few-Shot Learners</chapter-title>. In <source><italic>Advances in Neural Information Processing Systems</italic></source> (pp. <fpage>1877</fpage>–<lpage>1901</lpage>). <publisher-name>Curran Associates, Inc.</publisher-name>. <ext-link ext-link-type="uri" xlink:href="https://proceedings.neurips.cc/paper_files/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf">https://proceedings.neurips.cc/paper_files/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf</ext-link></mixed-citation></ref>
      <ref id="bib.bibx6"><mixed-citation publication-type="journal"><string-name><surname>Budak</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Goel</surname>, <given-names>S.</given-names></string-name>, &amp; <string-name><surname>Rao</surname>, <given-names>J. M.</given-names></string-name> (<year>2016</year>). <article-title>Fair and Balanced? Quantifying Media Bias through Crowdsourced Content Analysis</article-title>. <source><italic>Public Opinion Quarterly</italic></source>, <volume>80</volume>(<issue>S1</issue>), <fpage>250</fpage>–<lpage>271</lpage>.</mixed-citation></ref>
      <ref id="bib.bibx7"><mixed-citation publication-type="journal"><string-name><surname>Burggraaff</surname>, <given-names>C.</given-names></string-name>, &amp; <string-name><surname>Trilling</surname>, <given-names>D.</given-names></string-name> (<year>2020</year>). <article-title>Through a different gate: An automated content analysis of how online news and print news differ</article-title>. <source><italic>Journalism</italic></source>, <volume>21</volume>(<issue>1</issue>), <fpage>112</fpage>–<lpage>129</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1177/1464884917716699">https://doi.org/10.1177/1464884917716699</ext-link></mixed-citation></ref>
      <ref id="bib.bibx8"><mixed-citation publication-type="journal"><string-name><surname>Chang</surname>, <given-names>W.</given-names></string-name>, <string-name><surname>Du</surname>, <given-names>F.</given-names></string-name>, &amp; <string-name><surname>Wang</surname>, <given-names>Y.</given-names></string-name> (<year>2021</year>). <article-title>Research on Malicious URL Detection Technology Based on BERT Model</article-title>. <source><italic>2021 IEEE 9th International Conference on Information, Communication and Networks (ICICN)</italic></source>, <fpage>340</fpage>–<lpage>345</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1109/ICICN52636.2021.9673860">https://doi.org/10.1109/ICICN52636.2021.9673860</ext-link></mixed-citation></ref>
      <ref id="bib.bibx9"><mixed-citation publication-type="journal"><string-name><surname>Chuang</surname>, <given-names>J.</given-names></string-name>, &amp; <string-name><surname>Larochelle</surname>, <given-names>D.</given-names></string-name> (<year>2014</year>). <article-title>Large-Scale Topical Analysis of Multiple Online News Sources with Media Cloud</article-title>. <source><italic>NewsKDD: Data Science for News Publishing</italic></source>.</mixed-citation></ref>
      <ref id="bib.bibx10"><mixed-citation publication-type="journal"><string-name><surname>De Clercq</surname>, <given-names>O.</given-names></string-name>, <string-name><surname>de Bruyne</surname>, <given-names>L.</given-names></string-name>, &amp; <string-name><surname>Hoste</surname>, <given-names>V.</given-names></string-name> (<year>2020</year>). <article-title>News Topic Classification as a First Step Towards Diverse News Recommendation</article-title>. <source><italic>Computational Linguistics in the Netherlands Journal</italic></source>, <volume>10</volume>, <fpage>37</fpage>–<lpage>55</lpage>.</mixed-citation></ref>
      <ref id="bib.bibx11"><mixed-citation publication-type="journal"><string-name><surname>de León</surname>, <given-names>E.</given-names></string-name>, &amp; <string-name><surname>Trilling</surname>, <given-names>D.</given-names></string-name> (<year>2021</year>). <article-title>A Sadness Bias in Political News Sharing? The Role of Discrete Emotions in the Engagement and Dissemination of Political News on Facebook</article-title>. <source><italic>Social Media + Society</italic></source>, <volume>7</volume>(<issue>4</issue>), <fpage>205630512110597</fpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1177/20563051211059710">https://doi.org/10.1177/20563051211059710</ext-link></mixed-citation></ref>
      <ref id="bib.bibx12"><mixed-citation publication-type="journal"><string-name><surname>de León</surname>, <given-names>E.</given-names></string-name>, &amp; <string-name><surname>Vermeer</surname>, <given-names>S.</given-names></string-name> (<year>2023</year>). <article-title>The News Sharing Gap: Divergence in Online Political News Publication and Dissemination Patterns across Elections and Countries</article-title>. <source><italic>Digital Journalism</italic></source>, <volume>11</volume>(<issue>2</issue>), <fpage>343</fpage>–<lpage>362</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1080/21670811.2022.2099920">https://doi.org/10.1080/21670811.2022.2099920</ext-link></mixed-citation></ref>
      <ref id="bib.bibx13"><mixed-citation publication-type="journal"><string-name><surname>de León</surname>, <given-names>E.</given-names></string-name>, <string-name><surname>Vermeer</surname>, <given-names>S.</given-names></string-name>, &amp; <string-name><surname>Trilling</surname>, <given-names>D.</given-names></string-name> (<year>2023</year>). <article-title>Electoral news sharing: a study of changes in news coverage and Facebook sharing behaviour during the 2018 Mexican elections</article-title>. <source><italic>Information, Communication &amp; Society</italic></source>, <volume>26</volume>(<issue>6</issue>), <fpage>1193</fpage>–<lpage>1209</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1080/1369118X.2021.1994629">https://doi.org/10.1080/1369118X.2021.1994629</ext-link></mixed-citation></ref>
      <ref id="bib.bibx14"><mixed-citation publication-type="journal"><string-name><surname>de León</surname>, <given-names>E.</given-names></string-name>, <string-name><surname>Vermeer</surname>, <given-names>S.</given-names></string-name>, &amp; <string-name><surname>Trilling</surname>, <given-names>D.</given-names></string-name> (<year>2023</year>). <article-title>URLs Can Facilitate Machine Learning Classification of News Stories Across Languages and Contexts</article-title>. <source><italic>Computational Communication Research</italic></source>, <volume>5</volume>(<issue>2</issue>), <fpage>1</fpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.5117/CCR2023.2.4.DELE">https://doi.org/10.5117/CCR2023.2.4.DELE</ext-link></mixed-citation></ref>
      <ref id="bib.bibx15"><mixed-citation publication-type="journal"><string-name><surname>Devlin</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Chang</surname>, <given-names>MW.</given-names></string-name>, <string-name><surname>Lee</surname>, <given-names>K.</given-names></string-name>, &amp; <string-name><surname>Toutanova</surname>, <given-names>K.</given-names></string-name> (<year>2019</year>). <article-title>BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding</article-title>. <source><italic>Proceedings of NAACL-HLT 2019</italic></source>, <fpage>4171</fpage>–<lpage>4186</lpage>. Retrieved <month>March</month> <day>10</day>, <year>2020</year>, from  <ext-link ext-link-type="uri" xlink:href="http://arxiv.org/abs/1810.04805">http://arxiv.org/abs/1810.04805</ext-link></mixed-citation></ref>
      <ref id="bib.bibx16"><mixed-citation publication-type="journal"><string-name><surname>Flaxman</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Goel</surname>, <given-names>S.</given-names></string-name>, &amp; <string-name><surname>Rao</surname>, <given-names>J. M.</given-names></string-name> (<year>2016</year>). <article-title>Filter Bubbles, Echo Chambers, and Online News Consumption</article-title>. <source><italic>Public Opinion Quarterly</italic></source>, <volume>80</volume>(<issue>S1</issue>), <fpage>298</fpage>–<lpage>320</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1093/poq/nfw006">https://doi.org/10.1093/poq/nfw006</ext-link></mixed-citation></ref>
      <ref id="bib.bibx17"><mixed-citation publication-type="journal"><string-name><surname>Freelon</surname>, <given-names>D.</given-names></string-name> (<year>2018</year>). <article-title>Computational Research in the Post-API Age</article-title>. <source><italic>Political Communication</italic></source>, <volume>35</volume>(<issue>4</issue>), <fpage>665</fpage>–<lpage>668</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1080/10584609.2018.1477506">https://doi.org/10.1080/10584609.2018.1477506</ext-link></mixed-citation></ref>
      <ref id="bib.bibx18"><mixed-citation publication-type="book"><string-name><surname>Gasparetti</surname>, <given-names>F.</given-names></string-name> (<year>2017</year>). <source><italic>News Aggregator</italic></source>. </mixed-citation></ref>
      <ref id="bib.bibx19"><mixed-citation publication-type="book"><string-name><surname>Gouvert</surname>, <given-names>O.</given-names></string-name>, <string-name><surname>Hunter</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Louradour</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Cerisara</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Dufraisse</surname>, <given-names>E.</given-names></string-name>, <string-name><surname>Sy</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Rivière</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Lorré</surname>, <given-names>JP.</given-names></string-name>, &amp; <string-name><surname>the OpenLLM-France community</surname></string-name> (<year>2025</year>). <source><italic>The Lucie-7B LLM and the Lucie Training Dataset: Open resources for multilingual language generation</italic></source>. <publisher-name>arXiv</publisher-name>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.48550/arXiv.2503.12294">https://doi.org/10.48550/arXiv.2503.12294</ext-link></mixed-citation></ref>
      <ref id="bib.bibx20"><mixed-citation publication-type="journal"><string-name><surname>Guess</surname>, <given-names>A. M.</given-names></string-name> (<year>2021</year>). <article-title>(Almost) Everything in Moderation: New Evidence on Americans' Online Media Diets</article-title>. <source><italic>American Journal of Political Science</italic></source>, <volume>65</volume>(<issue>4</issue>), <fpage>1007</fpage>–<lpage>1022</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1111/ajps.12589">https://doi.org/10.1111/ajps.12589</ext-link></mixed-citation></ref>
      <ref id="bib.bibx21"><mixed-citation publication-type="journal"><string-name><surname>Hagar</surname>, <given-names>N.</given-names></string-name>, <string-name><surname>Diakopoulos</surname>, <given-names>N.</given-names></string-name>, &amp; <string-name><surname>DeWilde</surname>, <given-names>B.</given-names></string-name> (<year>2021</year>). <article-title>Anticipating Attention: On the Predictability of News Headline Tests</article-title>. <source><italic>Digital Journalism</italic></source>,  <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1080/21670811.2021.1984266">https://doi.org/10.1080/21670811.2021.1984266</ext-link></mixed-citation></ref>
      <ref id="bib.bibx22"><mixed-citation publication-type="journal"><string-name><surname>Hernández</surname>, <given-names>I.</given-names></string-name>, <string-name><surname>Rivero</surname>, <given-names>C. R.</given-names></string-name>, <string-name><surname>Ruiz</surname>, <given-names>D.</given-names></string-name>, &amp; <string-name><surname>Corchuelo</surname>, <given-names>R.</given-names></string-name> (<year>2012</year>). <article-title>A statistical approach to URL-based web page clustering</article-title>. <source><italic>Proceedings of the 21st International Conference on World Wide Web</italic></source>, <fpage>525</fpage>–<lpage>526</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1145/2187980.2188109">https://doi.org/10.1145/2187980.2188109</ext-link></mixed-citation></ref>
      <ref id="bib.bibx23"><mixed-citation publication-type="journal"><string-name><surname>Hernández</surname>, <given-names>I.</given-names></string-name>, <string-name><surname>Rivero</surname>, <given-names>C. R.</given-names></string-name>, <string-name><surname>Ruiz</surname>, <given-names>D.</given-names></string-name>, &amp; <string-name><surname>Corchuelo</surname>, <given-names>R.</given-names></string-name> (<year>2014</year>). <article-title>CALA: An unsupervised URL-based web page classification system</article-title>. <source><italic>Knowledge-Based Systems</italic></source>, <volume>57</volume>, <fpage>168</fpage>–<lpage>180</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1016/j.knosys.2013.12.019">https://doi.org/10.1016/j.knosys.2013.12.019</ext-link></mixed-citation></ref>
      <ref id="bib.bibx24"><mixed-citation publication-type="journal"><string-name><surname>Hutto</surname>, <given-names>C. J.</given-names></string-name>, &amp; <string-name><surname>Gilbert</surname>, <given-names>E.</given-names></string-name> (<year>2014</year>). <article-title>VADER: A Parsimonious Rule-based Model for Sentiment Analysis of Social Media Text</article-title>. <source><italic>Proceedings of the Eighth International AAAI Conference on Weblogs and Social Media</italic></source>.</mixed-citation></ref>
      <ref id="bib.bibx25"><mixed-citation publication-type="journal"><string-name><surname>Jahnavi</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Chandana</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Nair</surname>, <given-names>P. C.</given-names></string-name>, &amp; <string-name><surname>Dheeraj</surname>, <given-names>K.</given-names></string-name> (<year>2024</year>). <article-title>Classification of News Category Using Contextual Features</article-title>. <source><italic>2024 International Conference on Knowledge Engineering and Communication Systems (ICKECS)</italic></source>, <volume>1</volume>, <fpage>1</fpage>–<lpage>7</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1109/ICKECS61492.2024.10616859">https://doi.org/10.1109/ICKECS61492.2024.10616859</ext-link></mixed-citation></ref>
      <ref id="bib.bibx26"><mixed-citation publication-type="journal"><string-name><surname>Kaiser</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Rauchfleisch</surname>, <given-names>A.</given-names></string-name>, &amp; <string-name><surname>Bourassa</surname>, <given-names>N.</given-names></string-name> (<year>2019</year>). <article-title>Connecting the (Far-)Right Dots: A Topic Modeling and Hyperlink Analysis of (Far-)Right Media Coverage during the US Elections 2016</article-title>. <source><italic>Digital Journalism</italic></source>, <volume>8</volume>(<issue>3</issue>), <fpage>422</fpage>–<lpage>441</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1080/21670811.2019.1682629">https://doi.org/10.1080/21670811.2019.1682629</ext-link></mixed-citation></ref>
      <ref id="bib.bibx27"><mixed-citation publication-type="journal"><string-name><surname>Kan</surname>, <given-names>MY.</given-names></string-name> (<year>2004</year>). <article-title>Web Page Categorization without the Web Page</article-title>. <source><italic>Proceedings of the 13th International World Wide Web Conference on Alternate Track Papers &amp; Posters</italic></source>, <fpage>262</fpage>–<lpage>263</lpage>.</mixed-citation></ref>
      <ref id="bib.bibx28"><mixed-citation publication-type="journal"><string-name><surname>Kan</surname>, <given-names>MY.</given-names></string-name>, &amp; <string-name><surname>Thi</surname>, <given-names>H. O. N.</given-names></string-name> (<year>2005</year>). <article-title>Fast webpage classification using URL features</article-title>. <source><italic>Proceedings of the 14th ACM international conference on Information and knowledge management</italic></source>, <fpage>325</fpage>–<lpage>326</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1145/1099554.1099649">https://doi.org/10.1145/1099554.1099649</ext-link></mixed-citation></ref>
      <ref id="bib.bibx29"><mixed-citation publication-type="journal"><string-name><surname>Korenčić</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>Ristov</surname>, <given-names>S.</given-names></string-name>, &amp; <string-name><surname>Šnajder</surname>, <given-names>J.</given-names></string-name> (<year>2015</year>). <article-title>Getting the Agenda Right: Measuring Media Agenda using Topic Models</article-title>. <source><italic>Proceedings of the 2015 Workshop on Topic Models: Post-Processing and Applications</italic></source>, <fpage>61</fpage>–<lpage>66</lpage>.</mixed-citation></ref>
      <ref id="bib.bibx30"><mixed-citation publication-type="journal"><string-name><surname>Kuiken</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Schuth</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Spitters</surname>, <given-names>M.</given-names></string-name>, &amp; <string-name><surname>Marx</surname>, <given-names>M.</given-names></string-name> (<year>2017</year>). <article-title>Effective Headlines of Newspaper Articles in a Digital Environment</article-title>. <source><italic>Digital Journalism</italic></source>, <volume>5</volume>(<issue>10</issue>), <fpage>1300</fpage>–<lpage>1314</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1080/21670811.2017.1279978">https://doi.org/10.1080/21670811.2017.1279978</ext-link></mixed-citation></ref>
      <ref id="bib.bibx31"><mixed-citation publication-type="journal"><string-name><surname>Kuzman</surname>, <given-names>T.</given-names></string-name>, &amp; <string-name><surname>Ljubešić</surname>, <given-names>N.</given-names></string-name> (<year>2025</year>). <article-title>LLM Teacher-Student Framework for Text Classification With No Manually Annotated Data: A Case Study in IPTC News Topic Classification</article-title>. <source><italic>IEEE Access</italic></source>, <volume>13</volume>, <fpage>35621</fpage>–<lpage>35633</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1109/ACCESS.2025.3544814">https://doi.org/10.1109/ACCESS.2025.3544814</ext-link></mixed-citation></ref>
      <ref id="bib.bibx32"><mixed-citation publication-type="book"><string-name><surname>Li</surname>, <given-names>Z.</given-names></string-name>, <string-name><surname>Zhang</surname>, <given-names>X.</given-names></string-name>, <string-name><surname>Zhang</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Long</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>Xie</surname>, <given-names>P.</given-names></string-name>, &amp; <string-name><surname>Zhang</surname>, <given-names>M.</given-names></string-name> (<year>2023</year>). <source><italic>Towards General Text Embeddings with Multi-stage Contrastive Learning</italic></source>. <publisher-name>arXiv</publisher-name>. Retrieved <month>October</month> <day>28</day>, <year>2024</year>, from  <ext-link ext-link-type="uri" xlink:href="http://arxiv.org/abs/2308.03281">http://arxiv.org/abs/2308.03281</ext-link></mixed-citation></ref>
      <ref id="bib.bibx33"><mixed-citation publication-type="journal"><string-name><surname>Longpre</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Mahari</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Chen</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Obeng-Marnu</surname>, <given-names>N.</given-names></string-name>, <string-name><surname>Sileo</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>Brannon</surname>, <given-names>W.</given-names></string-name>, <string-name><surname>Muennighoff</surname>, <given-names>N.</given-names></string-name>, <string-name><surname>Khazam</surname>, <given-names>N.</given-names></string-name>, <string-name><surname>Kabbara</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Perisetla</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Wu</surname>, <given-names>X.</given-names></string-name>, <string-name><surname>Shippole</surname>, <given-names>E.</given-names></string-name>, <string-name><surname>Bollacker</surname>, <given-names>K.</given-names></string-name>, <string-name><surname>Wu</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Villa</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Pentland</surname>, <given-names>S.</given-names></string-name>, &amp; <string-name><surname>Hooker</surname>, <given-names>S.</given-names></string-name> (<year>2024</year>). <article-title>A large-scale audit of dataset licensing and attribution in AI</article-title>. <source><italic>Nature Machine Intelligence</italic></source>, <volume>6</volume>(<issue>8</issue>), <fpage>975</fpage>–<lpage>987</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1038/s42256-024-00878-8">https://doi.org/10.1038/s42256-024-00878-8</ext-link></mixed-citation></ref>
      <ref id="bib.bibx34"><mixed-citation publication-type="journal"><string-name><surname>Ma</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Saul</surname>, <given-names>L. K.</given-names></string-name>, <string-name><surname>Savage</surname>, <given-names>S.</given-names></string-name>, &amp; <string-name><surname>Voelker</surname>, <given-names>G. M.</given-names></string-name> (<year>2009</year>). <article-title>Beyond blacklists: learning to detect malicious web sites from suspicious URLs</article-title>. <source><italic>Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining</italic></source>, <fpage>1245</fpage>–<lpage>1254</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1145/1557019.1557153">https://doi.org/10.1145/1557019.1557153</ext-link></mixed-citation></ref>
      <ref id="bib.bibx35"><mixed-citation publication-type="book"><string-name><surname>Misra</surname>, <given-names>R.</given-names></string-name> (<year>2022</year>). <source><italic>News Category Dataset</italic></source>. <publisher-name>arXiv</publisher-name>. Retrieved <month>October</month> <day>30</day>, <year>2024</year>, from  <ext-link ext-link-type="uri" xlink:href="http://arxiv.org/abs/2209.11429">http://arxiv.org/abs/2209.11429</ext-link></mixed-citation></ref>
      <ref id="bib.bibx36"><mixed-citation publication-type="journal"><string-name><surname>Muennighoff</surname>, <given-names>N.</given-names></string-name>, <string-name><surname>Tazi</surname>, <given-names>N.</given-names></string-name>, <string-name><surname>Magne</surname>, <given-names>L.</given-names></string-name>, &amp; <string-name><surname>Reimers</surname>, <given-names>N.</given-names></string-name> (<year>2023</year>). <article-title>MTEB: Massive Text Embedding Benchmark</article-title>. <source><italic>Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics</italic></source>, <fpage>2014</fpage>–<lpage>2037</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.18653/v1/2023.eacl-main.148">https://doi.org/10.18653/v1/2023.eacl-main.148</ext-link></mixed-citation></ref>
      <ref id="bib.bibx37"><mixed-citation publication-type="journal"><string-name><surname>Paiola</surname>, <given-names>P. H.</given-names></string-name>, <string-name><surname>Garcia</surname>, <given-names>G. L.</given-names></string-name>, <string-name><surname>Jodas</surname>, <given-names>D. S.</given-names></string-name>, <string-name><surname>Correia</surname>, <given-names>J. V. M.</given-names></string-name>, <string-name><surname>Afonso</surname>, <given-names>L. C. S.</given-names></string-name>, &amp; <string-name><surname>Papa</surname>, <given-names>J. P.</given-names></string-name> (<year>2024</year>). <article-title>RecognaSumm: A Novel Brazilian Summarization Dataset</article-title>. <source><italic>Proceedings of the 16th International Conference on Computational Processing of Portuguese</italic></source>, <volume>1</volume>, <fpage>575</fpage>–<lpage>579</lpage>.</mixed-citation></ref>
      <ref id="bib.bibx38"><mixed-citation publication-type="journal"><string-name><surname>Rajalakshmi</surname>, <given-names>R.</given-names></string-name>, &amp; <string-name><surname>Aravindan</surname>, <given-names>C.</given-names></string-name> (<year>2018</year>). <article-title>A Naive Bayes approach for URL classification with supervised feature selection and rejection framework</article-title>. <source><italic>Computational Intelligence</italic></source>, <volume>34</volume>(<issue>1</issue>), <fpage>363</fpage>–<lpage>396</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1111/coin.12158">https://doi.org/10.1111/coin.12158</ext-link></mixed-citation></ref>
      <ref id="bib.bibx39"><mixed-citation publication-type="book"><string-name><surname>Sanh</surname>, <given-names>V.</given-names></string-name>, <string-name><surname>Debut</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Chaumond</surname>, <given-names>J.</given-names></string-name>, &amp; <string-name><surname>Wolf</surname>, <given-names>T.</given-names></string-name> (<year>2020</year>). <source><italic>DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter</italic></source>. <publisher-name>arXiv</publisher-name>. Retrieved <month>October</month> <day>28</day>, <year>2024</year>, from  <ext-link ext-link-type="uri" xlink:href="http://arxiv.org/abs/1910.01108">http://arxiv.org/abs/1910.01108</ext-link></mixed-citation></ref>
      <ref id="bib.bibx40"><mixed-citation publication-type="journal"><string-name><surname>Singh</surname>, <given-names>N.</given-names></string-name>, <string-name><surname>Sandhawalia</surname>, <given-names>H.</given-names></string-name>, <string-name><surname>Monet</surname>, <given-names>N.</given-names></string-name>, <string-name><surname>Poirier</surname>, <given-names>H.</given-names></string-name>, &amp; <string-name><surname>Coursimault</surname>, <given-names>J.-M.</given-names></string-name> (<year>2012</year>). <article-title>Large Scale URL-based Classification Using Online Incremental Learning</article-title>. <source><italic>2012 11th International Conference on Machine Learning and Applications</italic></source>, <fpage>402</fpage>–<lpage>409</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1109/ICMLA.2012.199">https://doi.org/10.1109/ICMLA.2012.199</ext-link></mixed-citation></ref>
      <ref id="bib.bibx41"><mixed-citation publication-type="journal"><string-name><surname>Singh</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Chun</surname>, <given-names>S. A.</given-names></string-name>, &amp; <string-name><surname>Atluri</surname>, <given-names>V.</given-names></string-name> (<year>2020</year>). <article-title>Developing Machine Learning Models to Automate News Classification</article-title>. <source><italic>The 21st Annual International Conference on Digital Government Research</italic></source>, <fpage>354</fpage>–<lpage>355</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1145/3396956.3397001">https://doi.org/10.1145/3396956.3397001</ext-link></mixed-citation></ref>
      <ref id="bib.bibx42"><mixed-citation publication-type="journal"><string-name><surname>Thelwall</surname>, <given-names>M.</given-names></string-name>, &amp; <string-name><surname>Vaughan</surname>, <given-names>L.</given-names></string-name> (<year>2004</year>). <article-title>A fair history of the Web? Examining country balance in the Internet Archive</article-title>. <source><italic>Library &amp; Information Science Research</italic></source>, <volume>26</volume>, <fpage>162</fpage>–<lpage>176</lpage>.</mixed-citation></ref>
      <ref id="bib.bibx43"><mixed-citation publication-type="journal"><string-name><surname>Trilling</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>Tolochko</surname>, <given-names>P.</given-names></string-name>, &amp; <string-name><surname>Burscher</surname>, <given-names>B.</given-names></string-name> (<year>2017</year>). <article-title>From Newsworthiness to Shareworthiness: How to Predict News Sharing Based on Article Characteristics</article-title>. <source><italic>Journalism &amp; Mass Communication Quarterly</italic></source>, <volume>94</volume>(<issue>1</issue>), <fpage>38</fpage>–<lpage>60</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1177/1077699016654682">https://doi.org/10.1177/1077699016654682</ext-link></mixed-citation></ref>
      <ref id="bib.bibx44"><mixed-citation publication-type="journal"><string-name><surname>Vanhoenshoven</surname>, <given-names>F.</given-names></string-name>, <string-name><surname>Napoles</surname>, <given-names>G.</given-names></string-name>, <string-name><surname>Falcon</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Vanhoof</surname>, <given-names>K.</given-names></string-name>, &amp; <string-name><surname>Koppen</surname>, <given-names>M.</given-names></string-name> (<year>2016</year>). <article-title>Detecting malicious URLs using machine learning techniques</article-title>. <source><italic>2016 IEEE Symposium Series on Computational Intelligence (SSCI)</italic></source>, <fpage>1</fpage>–<lpage>8</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1109/SSCI.2016.7850079">https://doi.org/10.1109/SSCI.2016.7850079</ext-link></mixed-citation></ref>
      <ref id="bib.bibx45"><mixed-citation publication-type="journal"><string-name><surname>Vaswani</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Shazeer</surname>, <given-names>N.</given-names></string-name>, <string-name><surname>Parmar</surname>, <given-names>N.</given-names></string-name>, <string-name><surname>Uszkoreit</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Jones</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Gomez</surname>, <given-names>A. N.</given-names></string-name>, <string-name><surname>Kaiser</surname>, <given-names>L.</given-names></string-name>, &amp; <string-name><surname>Polosukhin</surname>, <given-names>I.</given-names></string-name> (<year>2017</year>). <article-title>Attention Is All You Need</article-title>. <source><italic>Proceedings of the 31st International Conference on Neural Information Processing Systems</italic></source>, <fpage>6000</fpage>–<lpage>6010</lpage>. Retrieved <month>October</month> <day>29</day>, <year>2024</year>, from  <ext-link ext-link-type="uri" xlink:href="https://proceedings.neurips.cc/paper_files/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf">https://proceedings.neurips.cc/paper_files/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf</ext-link></mixed-citation></ref>
      <ref id="bib.bibx46"><mixed-citation publication-type="journal"><string-name><surname>Welsh</surname>, <given-names>B.</given-names></string-name> (<year>2024</year>). <article-title>Who blocks OpenAI, Google AI and Common Crawl?</article-title> <source><italic>News Homepages</italic></source>. Retrieved <month>October</month> <day>30</day>, <year>2024</year>, from <ext-link ext-link-type="uri" xlink:href="https://palewi.re/docs/news-homepages/openai-gptbot-robotstxt.html">https://palewi.re/docs/news-homepages/openai-gptbot-robotstxt.html</ext-link></mixed-citation></ref>
      <ref id="bib.bibx47"><mixed-citation publication-type="book"><string-name><surname>Wolf</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Debut</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Sanh</surname>, <given-names>V.</given-names></string-name>, <string-name><surname>Chaumond</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Delangue</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Moi</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Cistac</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Rault</surname>, <given-names>T.</given-names></string-name>, <string-name><surname>Louf</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Funtowicz</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Davison</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Shleifer</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Platen</surname>, <given-names>P. v.</given-names></string-name>, <string-name><surname>Ma</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Jernite</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Plu</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Xu</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Scao</surname>, <given-names>T. L.</given-names></string-name>, <string-name><surname>Gugger</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Drame</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Lhoest</surname>, <given-names>Q.</given-names></string-name>, &amp; <string-name><surname>Rush</surname>, <given-names>A. M.</given-names></string-name> (<year>2020</year>). 
<source><italic>HuggingFace's Transformers: State-of-the-art Natural Language Processing</italic></source>. <publisher-name>arXiv</publisher-name>. Retrieved <month>October</month> <day>29</day>, <year>2024</year>, from  <ext-link ext-link-type="uri" xlink:href="http://arxiv.org/abs/1910.03771">http://arxiv.org/abs/1910.03771</ext-link></mixed-citation></ref>
      <ref id="bib.bibx48"><mixed-citation publication-type="journal"><string-name><surname>Zhang</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Qin</surname>, <given-names>J.</given-names></string-name>, &amp; <string-name><surname>Yan</surname>, <given-names>Q.</given-names></string-name> (<year>2006</year>). <article-title>The Role of URLs in Objectionable Web Content Categorization</article-title>. <source><italic>2006 IEEE/WIC/ACM International Conference on Web Intelligence (WI 2006 Main Conference Proceedings)(WI'06)</italic></source>, <fpage>277</fpage>–<lpage>283</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1109/WI.2006.170">https://doi.org/10.1109/WI.2006.170</ext-link></mixed-citation></ref>
      <ref id="bib.bibx49"><mixed-citation publication-type="journal"><string-name><surname>Zhang</surname>, <given-names>Y.</given-names></string-name>, <string-name><surname>Wang</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Li</surname>, <given-names>Q.</given-names></string-name>, <string-name><surname>Tiwari</surname>, <given-names>P.</given-names></string-name>, &amp; <string-name><surname>Qin</surname>, <given-names>J.</given-names></string-name> (<year>2025</year>). <article-title>Pushing The Limit of LLM Capacity for Text Classification</article-title>. <source><italic>Companion Proceedings of the ACM on Web Conference 2025</italic></source>, <fpage>1524</fpage>–<lpage>1528</lpage>. <ext-link ext-link-type="doi" xlink:href="https://doi.org/10.1145/3701716.3715528">https://doi.org/10.1145/3701716.3715528</ext-link></mixed-citation></ref>
    </ref-list>
    <app-group>
      <app id="A1">
      
      
      
      
    <title>Appendix A Topic-Level F1 Scores</title><p id="A1.p1">Table <xref rid="A1.T3" ref-type="table">A1</xref> presents F1 scores for each topic under every combination of model and input features across the three benchmark datasets. These granular results complement the aggregate metrics reported in the main text, showing how classification accuracy varies by both topic and input feature type. Performance variation is most pronounced for the HuffPost dataset, with its 14 fine-grained topics, while the News Aggregator and RecognaSumm datasets perform more consistently across their broader topic categories. The table also shows that URL-based features achieve performance competitive with or superior to article text features across topic categories.</p><table-wrap id="A1.T3"><label>Table A1:</label><caption><title>F1 scores for all combinations of topics and input features across datasets, with the highest F1 scores in bold.</title></caption>
        
        
        
        
      </table-wrap></app>
      <app id="A2">
      
      
      
      
    <title>Appendix B Date Ablation Analysis</title><p id="A2.p1">To assess whether models were learning spurious associations between publication dates and topics, I conducted an ablation study that removed all date components (e.g., YYYY/MM/DD patterns) from URL paths, then retrained and evaluated logistic regression models on the same train and test splits. Table <xref rid="A2.T4" ref-type="table">B1</xref> compares model performance with and without dates across all three datasets. The minimal performance differences confirm that classification accuracy is driven by the semantic content of URL paths rather than by temporal patterns.</p><table-wrap id="A2.T4"><label>Table B1:</label><caption><title>Impact of date removal on logistic regression classifier performance using raw URL paths. Performance remains largely stable across datasets when dates are excluded, indicating minimal reliance on temporal patterns.</title></caption>
        
        
        
        
      
<table>
<thead>
<tr>
<th><bold>Dataset</bold></th>
<th><bold>F1 (With Dates)</bold></th>
<th><bold>F1 (No Dates)</bold></th>
<th><inline-formula><mml:math id="A2.T4.m1" alttext="\Delta" display="inline"><mml:mi mathcolor="#000000" mathvariant="normal">Δ</mml:mi></mml:math></inline-formula><bold> F1</bold></th></tr>
</thead>
<tbody>
<tr>
<td>HuffPost</td>
<td>0.584</td>
<td>0.576</td>
<td><inline-formula><mml:math id="A2.T4.m2" alttext="-0.009" display="inline"><mml:mrow><mml:mo mathcolor="#000000">−</mml:mo><mml:mn mathcolor="#000000">0.009</mml:mn></mml:mrow></mml:math></inline-formula></td></tr>
<tr>
<td>News Aggregator</td>
<td>0.861</td>
<td>0.861</td>
<td><inline-formula><mml:math id="A2.T4.m3" alttext="+0.000" display="inline"><mml:mrow><mml:mo mathcolor="#000000">+</mml:mo><mml:mn mathcolor="#000000">0.000</mml:mn></mml:mrow></mml:math></inline-formula></td></tr>
<tr>
<td>RecognaSumm</td>
<td>0.926</td>
<td>0.909</td>
<td><inline-formula><mml:math id="A2.T4.m4" alttext="-0.016" display="inline"><mml:mrow><mml:mo mathcolor="#000000">−</mml:mo><mml:mn mathcolor="#000000">0.016</mml:mn></mml:mrow></mml:math></inline-formula></td></tr>
<tr>
<td><italic>Average</italic></td>
<td><italic>0.790</italic></td>
<td><italic>0.782</italic></td>
<td><inline-formula><mml:math id="A2.T4.m5" alttext="-0.008" display="inline"><mml:mrow><mml:mo mathcolor="#000000">−</mml:mo><mml:mn mathcolor="#000000">0.008</mml:mn></mml:mrow></mml:math></inline-formula></td></tr>
</tbody>
</table></table-wrap></app>
    </app-group>
  </back>
</article>