CNERG - Complex Networks Research Group, IIT Kharagpur on LinkedIn: THE WEB CONFERENCE 2024 IN SINGAPORE (2024)

CNERG - Complex Networks Research Group, IIT Kharagpur

2,361 followers


Paper accepted @ TheWebConference 2023 (https://lnkd.in/ecJD8dnj)

Heartiest congratulations on the acceptance of the following work at TheWebConference 2023. The title and list of authors of the work are as follows:

Title: HateProof: Are Hateful Meme Detection Systems really Robust?

Authors: Piush Aggarwal, Pranit Chawla, Mithun Das, Punyajoy Saha, Dr. Binny Mathew, Torsten Zesch and Animesh Mukherjee




Paper accepted in IJCAI 2024 (https://ijcai24.org/)

Heartiest congratulations on the acceptance of the following work at IJCAI 2024. The title, list of authors and summary of the work are as follows:

Title: MEDVOC: Vocabulary Adaptation for Fine-tuning Pre-trained Language Models on Medical Text Summarization

Authors: Gunjan Balde, Soumyadeep Roy, Mainack Mondal, Niloy Ganguly

Summary: This work presents a dynamic vocabulary adaptation strategy, MEDVOC, for fine-tuning pre-trained language models (PLMs) such as BertSumAbs, BART, and PEGASUS for improved medical text summarization. In contrast to existing domain adaptation approaches in summarization, MEDVOC treats vocabulary as an optimizable parameter and optimizes the PLM vocabulary based on a fragment score conditioned only on the downstream task's reference summaries. Unlike previous work on vocabulary adaptation (limited to classification tasks), optimizing the vocabulary for summarization requires an additional, extremely costly intermediate fine-tuning step on large summarization datasets. To that end, our novel fragment-score-based hyperparameter search drastically reduces this fine-tuning time, from 450 days to less than 2 days on average. Furthermore, while previous work on vocabulary adaptation is often tied to a single PLM, MEDVOC is designed to be deployable across multiple PLMs (with varying model vocabulary sizes, pre-training objectives, and model sizes), bridging the limited vocabulary overlap between the biomedical literature domain and PLMs. MEDVOC outperforms baselines by 15.74% in terms of Rouge-L in the zero-shot setting and shows gains of 17.28% at high Out-Of-Vocabulary (OOV) concentrations. Our human evaluation shows that MEDVOC generates more faithful medical summaries (88% compared to 59% for baselines).
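A fragment score, as described above, captures how finely a tokenizer's vocabulary splits domain words into subword pieces. The following is a rough, hypothetical sketch of the idea only; the paper's exact definition may differ, and the vocabularies below are toy examples, not MEDVOC's:

```python
# Toy "fragment score": average number of subword pieces per word under a
# greedy longest-match tokenization. A vocabulary better adapted to a domain
# splits its words into fewer pieces (score closer to 1).

def fragment_score(words, vocab):
    """Greedy longest-match subword tokenization, then pieces-per-word ratio."""
    total_pieces = 0
    for word in words:
        i = 0
        while i < len(word):
            # Find the longest vocabulary entry starting at position i;
            # fall back to a single character if nothing matches.
            for j in range(len(word), i, -1):
                if word[i:j] in vocab or j == i + 1:
                    total_pieces += 1
                    i = j
                    break
    return total_pieces / len(words)

general_vocab = {"sum", "mar", "ize", "pat", "ient"}          # toy general-domain vocab
medical_vocab = general_vocab | {"patient", "hypertension"}   # toy domain-adapted vocab

words = ["patient", "hypertension"]
print(fragment_score(words, general_vocab))   # higher: words fragment heavily
print(fragment_score(words, medical_vocab))   # 1.0: each word is a single piece
```

Intuitively, MEDVOC's search would favour vocabulary candidates that drive this score down on the downstream task's reference summaries.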



Paper accepted in ACM CSCW 2024 (https://cscw.acm.org/2024/)

Heartiest congratulations on the acceptance of the following work at ACM CSCW 2024. The title, list of authors and abstract of the work are as follows:

Title: Investigating Nudges toward Related Sellers on E-commerce Marketplaces: A Case Study on Amazon

Authors: Abhisek Dash, Abhijnan Chakraborty, Saptarshi Ghosh, Animesh Mukherjee, Krishna Gummadi

Summary: In this work we show how the Amazon buybox algorithm is biased toward favouring special merchants (in which Amazon has equity stakes) and other related sellers who use its subsidiary services. We also show how the policy of striking through reviews for these related sellers inflates their ratings and positive feedback, further marginalising the already marginalised small-scale sellers.




Paper accepted in ACM TWEB Journal (https://lnkd.in/dSMdH8Wh)

Heartiest congratulations on the acceptance of the following work in ACM TWEB. The details of the work are as follows:

Title: MuLX-QA: Classifying Multi-Labels and Extracting Rationale Spans in Social Media Posts

Authors: Soham Poddar, Rajdeep Mukherjee, Azlaan Mustafa Samad, Niloy Ganguly, Saptarshi Ghosh

Abstract: While social media platforms play an important role in our daily lives by delivering the latest news and trends from across the globe, they are known to be prone to widespread proliferation of harmful information in different forms, leading to misconceptions among the masses. Accordingly, several prior works have attempted to tag social media posts with labels/classes reflecting their veracity, sentiment, hate content, etc. However, to have a convincing impact, it is important to additionally extract the post snippets on which the labelling decision is based. We call such a post snippet the 'rationale'. These rationales significantly improve human trust in, and the debuggability of, the predictions, especially when detecting misinformation or stigmas in social media posts. These rationale spans or snippets are also helpful in post-classification social analysis, such as finding the target communities of hate speech, or understanding the arguments or concerns against the intake of vaccines. It is also observed that a post may express multiple notions of misinformation, hate, sentiment, etc. Thus, the task of determining (one or multiple) labels for a given piece of text, along with the text snippets explaining the rationale behind each of the identified labels, is a challenging multi-label, multi-rationale classification task that is still nascent in the literature.

While transformer-based encoder-decoder generative models such as BART and T5 are well-suited for the task, in this work we show how a relatively simpler encoder-only discriminative question-answering (QA) model can be effectively trained using simple template-based questions to accomplish the task. We thus propose MuLX-QA and demonstrate its utility in producing (label, rationale span) pairs in two different settings: multi-class (on the HateXplain dataset related to hate speech on social media) and multi-label (on the CAVES dataset related to COVID-19 anti-vaccine concerns). MuLX-QA outperforms heavier generative models in both settings. We also demonstrate the relative advantage of MuLX-QA over strong baselines when trained with limited data. We perform several ablation studies and experiments to better understand the effect of training MuLX-QA with different question prompts, and draw interesting inferences. Additionally, we show that MuLX-QA is effective on social media posts in resource-poor non-English languages as well. Finally, we perform a qualitative analysis of our model's predictions and compare them with those of our strongest baseline.
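The template-based question construction mentioned above can be sketched as follows: one templated question per candidate label is paired with the post, and an extractive QA model either returns a rationale span (the label applies) or predicts "no answer" (it does not). The label set and template wording here are illustrative only, not taken from the paper:

```python
# Hypothetical sketch of template-based QA input construction for
# multi-label classification with rationale extraction. Labels and the
# question template are illustrative, not MuLX-QA's actual prompts.

LABELS = ["side-effect", "ineffective", "conspiracy"]  # toy CAVES-style concern labels
TEMPLATE = "What phrase in the post expresses the concern '{label}'?"

def build_qa_inputs(post, labels=LABELS):
    """Return one (question, context) pair per candidate label."""
    return [(TEMPLATE.format(label=label), post) for label in labels]

post = "I won't take it; the jab gave my uncle terrible chest pain."
for question, context in build_qa_inputs(post):
    print(question)
```

An encoder-only span-prediction head (BERT-style) would then score start/end positions over each context; a "no answer" prediction rejects that label, so multiple (label, rationale span) pairs per post fall out naturally.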


