1. State Key Laboratory of Cognitive Intelligence & University of Science and Technology of China, Hefei 230000, China
2. Department of Data Science, City University of Hong Kong, Hong Kong 999077, China
3. Jarvis Research Center, Tencent YouTu Lab, Beijing 100029, China
4. Anhui Conch Information Technology Engineering Co., Ltd., Wuhu 241000, China
Information Extraction (IE) aims to extract structured knowledge from plain natural language texts. Recently, generative Large Language Models (LLMs) have demonstrated remarkable capabilities in text understanding and generation, and numerous works have been proposed to integrate LLMs into IE tasks under a generative paradigm. To provide a comprehensive and systematic review of this field, we survey the most recent advancements in generative IE with LLMs. We first present an extensive overview that categorizes these works by IE subtask and by technique, and then we empirically analyze the most advanced methods and identify emerging trends in IE with LLMs. Based on this review, we highlight several technical insights and promising research directions that deserve further exploration in future studies. We maintain a public repository and consistently update related works and resources on GitHub (LLM4IE repository).
Just Accepted Date: 10 October 2024. Issue Date: 08 November 2024
Cite this article:
Derong XU, Wei CHEN, Wenjun PENG, et al. Large language models for generative information extraction: a survey[J]. Front. Comput. Sci., 2024, 18(6): 186357.
Fig.1 LLMs have been extensively explored for generative IE. These studies encompass various IE techniques, specialized frameworks designed for a single subtask, and universal frameworks capable of addressing multiple subtasks simultaneously
Fig.2 Taxonomy of research in generative IE using LLMs. Some papers have been omitted due to space limitations
Fig.3 Examples of different IE tasks
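To make the subtasks in Fig.3 concrete, the following minimal sketch spells out, in Python literals, the structured targets a generative model is asked to produce for NER, RE, and EE. The sentence, entity types, and event schema are invented for illustration and do not come from any benchmark.

```python
# A toy sentence and the structured outputs a generative IE model is
# expected to emit for each subtask (illustrative schema only).
sentence = "Steve Jobs co-founded Apple in Cupertino in 1976."

ner_output = [  # Named Entity Recognition: (span, type)
    ("Steve Jobs", "PERSON"),
    ("Apple", "ORGANIZATION"),
    ("Cupertino", "LOCATION"),
]

re_output = [  # Relation Extraction: (head entity, relation, tail entity)
    ("Steve Jobs", "founder_of", "Apple"),
    ("Apple", "located_in", "Cupertino"),
]

ee_output = {  # Event Extraction: typed trigger plus argument roles
    "event_type": "Business.Found",
    "trigger": "co-founded",
    "arguments": [
        {"role": "Agent", "span": "Steve Jobs"},
        {"role": "Organization", "span": "Apple"},
        {"role": "Place", "span": "Cupertino"},
        {"role": "Time", "span": "1976"},
    ],
}
```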
| Representative model | Paradigm | Uni. | Backbone | ACE04 | ACE05 | CoNLL03 | Onto. 5 | GENIA |
|---|---|---|---|---|---|---|---|---|
| DEEPSTRUCT [151] | CDL | ✓ | GLM-10B | − | 28.1 | 44.4 | 42.5 | 47.2 |
| Xie et al. [63] | ZS Pr | | GPT-3.5-turbo | − | 32.27 | 74.51 | − | 52.06 |
| CODEIE [36] | ICL | ✓ | Code-davinci-002 | 55.29 | 54.82 | 82.32 | − | − |
| Code4UIE [6] | ICL | ✓ | Text-davinci-003 | 60.1 | 60.9 | 83.6 | − | − |
| PromptNER [43] | ICL | | GPT-4 | − | − | 83.48 | − | 58.44 |
| Xie et al. [63] | ICL | | GPT-3.5-turbo | − | 55.54 | 84.51 | − | 58.72 |
| GPT-NER [42] | ICL | | Text-davinci-003 | 74.2 | 73.59 | 90.91 | 82.2 | 64.42 |
| TANL [33] | SFT | ✓ | T5-base | − | 84.9 | 91.7 | 89.8 | 76.4 |
| Cui et al. [53] | SFT | | BART | − | − | 92.55 | − | − |
| Yan et al. [37] | SFT | | BART-large | 86.84 | 84.74 | 93.24 | 90.38 | 79.23 |
| UIE [4] | SFT | ✓ | T5-large | 86.89 | 85.78 | 92.99 | − | − |
| DEEPSTRUCT [151] | SFT | ✓ | GLM-10B | − | 86.9 | 93.0 | 87.8 | 80.8 |
| Xia et al. [56] | SFT | | BART-large | 87.63 | 86.22 | 93.48 | 90.63 | 79.49 |
| InstructUIE [5] | SFT | ✓ | Flan-T5-11B | − | 86.66 | 92.94 | 90.19 | 74.71 |
| UniNER [64] | SFT | | LLaMA-7B | 87.5 | 87.6 | − | 89.1 | 80.6 |
| GoLLIE [32] | SFT | ✓ | Code-LLaMA-34B | − | 89.6 | 93.1 | 84.6 | − |
| EnTDA [58] | DA | | T5-base | 88.21 | 87.56 | 93.88 | 91.34 | 82.25 |
| YAYI-UIE [155] | SFT | ✓ | Baichuan2-13B | − | 81.78 | 96.77 | 87.04 | 75.21 |
| ToNER [88] | SFT | | Flan-T5-3B | 88.09 | 86.68 | 93.59 | 91.30 | − |
| KnowCoder [160] | SFT | ✓ | LLaMA2-7B | 86.2 | 86.1 | 95.1 | 88.2 | 76.7 |
| GNER [67] | SFT | | Flan-T5-11B | − | − | 93.28 | 91.83 | − |
| USM [30]† | SFT | ✓ | RoBERTa-large | 87.62 | 87.14 | 93.16 | − | − |
| RexUIE [197]† | SFT | ✓ | DeBERTa-v3-large | 87.25 | 87.23 | 93.67 | − | − |
| Mirror [198]† | SFT | ✓ | DeBERTa-v3-large | 87.16 | 85.34 | 92.73 | − | − |

Tab.1 Comparison of LLMs for named entity recognition (identification & typing) with the Micro-F1 metric (%). † indicates that the model is discriminative; some universal and discriminative models are included for comparison. IE techniques include Cross-Domain Learning (CDL), Zero-Shot Prompting (ZS Pr), In-Context Learning (ICL), Supervised Fine-Tuning (SFT), and Data Augmentation (DA). A ✓ in the Uni. column denotes that the model is universal. Onto. 5 denotes OntoNotes 5.0. Details of datasets and backbones are presented in Section 8. The settings for all subsequent tables are consistent with this format
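For reference, the Micro-F1 numbers in Tab.1 are typically computed by pooling (span, type) pairs over the whole test set. The sketch below shows this exact-match convention; individual papers may differ in how spans are normalized or duplicates are handled.

```python
from collections import Counter

def micro_f1(gold_sets, pred_sets):
    """Micro-averaged F1 over (span, type) pairs pooled across sentences.

    gold_sets / pred_sets: one collection of hashable items per sentence,
    e.g. {("Steve Jobs", "PERSON"), ("Apple", "ORGANIZATION")}.
    """
    tp = fp = fn = 0
    for gold, pred in zip(gold_sets, pred_sets):
        gold, pred = Counter(gold), Counter(pred)
        tp += sum((gold & pred).values())   # correctly predicted mentions
        fp += sum((pred - gold).values())   # spurious predictions
        fn += sum((gold - pred).values())   # missed gold mentions
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)
```

Under this convention an entity counts as correct only when both its boundary and its type match the gold annotation, which is why identification and typing are evaluated jointly in Tab.1.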
| Representative model | Technique | Uni. | Backbone | NYT | ACE05 | ADE | CoNLL04 | SciERC |
|---|---|---|---|---|---|---|---|---|
| CodeKGC [159] | ZS Pr | ✓ | Text-davinci-003 | − | − | 42.8 | 35.9 | 15.3 |
| CODEIE [36] | ICL | ✓ | Code-davinci-002 | 32.17 | 14.02 | − | 53.1 | 7.74 |
| CodeKGC [159] | ICL | ✓ | Text-davinci-003 | − | − | 64.6 | 49.8 | 24.0 |
| Code4UIE [6] | ICL | ✓ | Text-davinci-002 | 54.4 | 17.5 | 58.6 | 54.4 | − |
| REBEL [39] | SFT | | BART-large | 91.96 | − | 82.21 | 75.35 | − |
| UIE [4] | SFT | ✓ | T5-large | − | 66.06 | − | 75.0 | 36.53 |
| InstructUIE [5] | SFT | ✓ | Flan-T5-11B | 90.47 | − | 82.31 | 78.48 | 45.15 |
| GoLLIE [32] | SFT | ✓ | Code-LLaMA-34B | − | 70.1 | − | − | − |
| YAYI-UIE [155] | SFT | ✓ | Baichuan2-13B | 89.97 | − | 84.41 | 79.73 | 40.94 |
| KnowCoder [160] | SFT | ✓ | LLaMA2-7B | 93.7 | 64.5 | 84.8 | 73.3 | 40.0 |
| USM [30]† | SFT | ✓ | RoBERTa-large | − | 67.88 | − | 78.84 | 37.36 |
| RexUIE [197]† | SFT | ✓ | DeBERTa-v3-large | − | 64.87 | − | 78.39 | 38.37 |

Tab.2 Comparison of LLMs for relation extraction with the “relation strict” [4] Micro-F1 metric (%). † indicates that the model is discriminative
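The “relation strict” criterion used in Tab.2 can be stated as a small predicate. This is a sketch of the matching rule described in UIE [4]; the field names are our own illustration.

```python
def relation_strict_match(gold, pred):
    """A predicted triple is credited only if the relation type matches
    AND both entities match in boundary and entity type (sketch)."""
    return (pred["relation"] == gold["relation"]
            and pred["head_span"] == gold["head_span"]
            and pred["head_type"] == gold["head_type"]
            and pred["tail_span"] == gold["tail_span"]
            and pred["tail_type"] == gold["tail_type"])
```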
| Representative model | Technique | Uni. | Backbone | TACRED | Re-TACRED | TACREV | SemEval |
|---|---|---|---|---|---|---|---|
| QA4RE [98] | ZS Pr | | Text-davinci-003 | 59.4 | 61.2 | 59.4 | 43.3 |
| SUMASK [96] | ZS Pr | | GPT-3.5-turbo-0301 | 79.6 | 73.8 | 75.1 | − |
| GPT-RE [99] | ICL | | Text-davinci-003 | 72.15 | − | − | 91.9 |
| Xu et al. [44] | ICL | | Text-davinci-003 | 31.0 | 51.8 | 31.9 | − |
| REBEL [39] | SFT | | BART-large | − | 90.36 | − | − |
| Xu et al. [44] | DA | | Text-davinci-003 | 37.4 | 66.2 | 41.0 | − |

Tab.3 Comparison of LLMs for relation classification with the Micro-F1 metric (%)
| Representative model | Technique | Uni. | Backbone | Trg-I | Trg-C | Arg-I | Arg-C |
|---|---|---|---|---|---|---|---|
| Code4Struct [41] | ZS Pr | | Code-davinci-002 | − | − | 50.6 | 36.0 |
| Code4UIE [6] | ICL | ✓ | GPT-3.5-turbo-16k | − | 37.4 | − | 21.3 |
| Code4Struct [41] | ICL | | Code-davinci-002 | − | − | 62.1 | 58.5 |
| TANL [33] | SFT | ✓ | T5-base | 72.9 | 68.4 | 50.1 | 47.6 |
| Text2Event [131] | SFT | | T5-large | − | 71.9 | − | 53.8 |
| BART-Gen [130] | SFT | | BART-large | − | − | 69.9 | 66.7 |
| UIE [4] | SFT | ✓ | T5-large | − | 73.36 | − | 54.79 |
| GTEE-DYNPREF [135] | SFT | | BART-large | − | 72.6 | − | 55.8 |
| DEEPSTRUCT [151] | SFT | ✓ | GLM-10B | 73.5 | 69.8 | 59.4 | 56.2 |
| PAIE [134] | SFT | | BART-large | − | − | 75.7 | 72.7 |
| PGAD [137] | SFT | | BART-base | − | − | 74.1 | 70.5 |
| QGA-EE [138] | SFT | | T5-large | − | − | 75.0 | 72.8 |
| InstructUIE [5] | SFT | ✓ | Flan-T5-11B | − | 77.13 | − | 72.94 |
| GoLLIE [32] | SFT | ✓ | Code-LLaMA-34B | − | 71.9 | − | 68.6 |
| YAYI-UIE [155] | SFT | ✓ | Baichuan2-13B | − | 65.0 | − | 62.71 |
| KnowCoder [160] | SFT | ✓ | LLaMA2-7B | − | 74.2 | − | 70.3 |
| USM [30]† | SFT | ✓ | RoBERTa-large | − | 72.41 | − | 55.83 |
| RexUIE [197]† | SFT | ✓ | DeBERTa-v3-large | − | 75.17 | − | 59.15 |
| Mirror [198]† | SFT | ✓ | DeBERTa-v3-large | − | 74.44 | − | 55.88 |

Tab.4 Comparison of Micro-F1 values for event extraction on ACE05. Evaluation tasks include: Trigger Identification (Trg-I), Trigger Classification (Trg-C), Argument Identification (Arg-I), and Argument Classification (Arg-C). † indicates that the model is discriminative
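The four ACE05 criteria in Tab.4 form a hierarchy of increasingly strict matches. The sketch below states them as predicates; the field names are illustrative, and papers differ in offset normalization.

```python
# Sketch of the four ACE05 event evaluation criteria used in Tab.4.

def trg_i(gold, pred):   # Trigger Identification: trigger span only
    return pred["trigger_span"] == gold["trigger_span"]

def trg_c(gold, pred):   # Trigger Classification: span + event type
    return trg_i(gold, pred) and pred["event_type"] == gold["event_type"]

def arg_i(gold_arg, pred_arg):   # Argument Identification: span + event type
    return (pred_arg["span"] == gold_arg["span"]
            and pred_arg["event_type"] == gold_arg["event_type"])

def arg_c(gold_arg, pred_arg):   # Argument Classification: + role label
    return arg_i(gold_arg, pred_arg) and pred_arg["role"] == gold_arg["role"]
```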
Fig.4 The comparison of prompts of NL-LLMs and Code-LLMs for universal IE. Both NL-based and code-based methods attempt to construct a universal schema, but they differ in prompt format and in the way they utilize the generation capabilities of LLMs. This figure is adapted from [5] and [6]
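To illustrate the contrast in Fig.4, here are two paraphrased prompt skeletons: an NL-style instruction in the spirit of InstructUIE [5] and a code-style prompt in the spirit of Code4UIE [6]. The wording, class names, and example sentence are our own simplifications, not verbatim prompts from either paper.

```python
# NL-LLM style: the schema is verbalized as an instruction plus label options.
nl_prompt = """Given the options of relation types, extract all
(head entity, relation, tail entity) triples from the text.
Options: founder_of, located_in
Text: Steve Jobs co-founded Apple in Cupertino.
Answer:"""

# Code-LLM style: the schema is expressed as class definitions, and the
# model completes an instantiation of those classes.
code_prompt = '''class Entity:
    def __init__(self, name: str):
        self.name = name

class FounderOf:
    """A person founded an organization."""
    def __init__(self, head: Entity, tail: Entity):
        self.head, self.tail = head, tail

# Text: "Steve Jobs co-founded Apple in Cupertino."
# List the relation instances mentioned in the text.
results = ['''
```

Code-style prompts let the target schema constrain decoding through the syntax of the programming language itself, which is one reason code-pretrained backbones such as Code-davinci-002 and Code-LLaMA appear throughout Tabs. 1−4.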
Fig.5 Comparison of data augmentation methods
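As a companion to Fig.5, the sketch below shows one common augmentation strategy surveyed here, LLM-based data annotation: an LLM labels unlabeled sentences, and the resulting silver data can be used to fine-tune a smaller extractor. `chat` stands in for any chat-completion client; the prompt, entity types, and JSON parsing are our own simplifications.

```python
import json

ANNOTATE = (
    "Label all PERSON, ORGANIZATION and LOCATION entities in the "
    "sentence below. Reply with a JSON list of [span, type] pairs.\n"
    "Sentence: {sentence}"
)

def annotate(chat, unlabeled_sentences):
    """Produce silver NER data by prompting an LLM annotator (sketch)."""
    silver = []
    for sentence in unlabeled_sentences:
        reply = chat(ANNOTATE.format(sentence=sentence))
        try:
            entities = json.loads(reply)   # e.g. [["Steve Jobs", "PERSON"]]
        except json.JSONDecodeError:
            continue                       # skip malformed generations
        silver.append({"text": sentence, "entities": entities})
    return silver                          # training data for a small model
```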
| Domain | Method | Task | Paradigm | Backbone |
|---|---|---|---|---|
| Multimodal | Cai et al. [57] | NER | ICL | GPT-3.5 |
| | PGIM [167] | NER | DA | BLIP2, GPT-3.5 |
| | RiVEG [94] | NER | DA | Vicuna, LLaMA2, GPT-3.5 |
| | Chen et al. [166] | NER, RE | DA | BLIP2, GPT-3.5, GPT-4 |
| Multilingual | Meoni et al. [163] | NER | DA | Text-davinci-003 |
| | Naguib et al. [83] | NER | ICL | − |
| | Huang et al. [133] | EE | CDL | mBART, mT5 |
| Medical | Bian et al. [171] | NER | DA | GPT-3.5 |
| | Hu et al. [184] | NER | ZS Pr | GPT-3.5, GPT-4 |
| | Meoni et al. [163] | NER | DA | Text-davinci-003 |
| | Naguib et al. [83] | NER | ICL | − |
| | VANER [188] | NER | SFT | LLaMA2 |
| | RT [91] | NER | ICL | GPT-4 |
| | Munnangi et al. [85] | NER | ZS Pr, ICL, FS FT | GPT-3.5, GPT-4, Claude-2, LLaMA2 |
| | Monajatipoor et al. [179] | NER | SFT, ICL | − |
| | Hu et al. [172] | NER | ZS Pr, ICL | GPT-3.5, GPT-4 |
| | Gutiérrez et al. [187] | NER, RE | ICL | GPT-3 |
| | GPT3+R [185] | NER, RE | − | Text-davinci-002 |
| | Labrak et al. [186] | NER, RE | − | GPT-3.5, Flan-UL2, Tk-Instruct, Alpaca |
| | Tang et al. [162] | NER, RE | DA | GPT-3.5 |
| | DICE [183] | EE | SFT | T5-Large |
| Scientific | Bölücü et al. [80] | NER | ICL | GPT-3.5 |
| | Dunn et al. [180] | NER, RE | SFT | GPT-3 |
| | PolyIE [181] | NER, RE | ICL | GPT-3.5, GPT-4 |
| | Foppiano et al. [47] | NER, RE | ZS Pr, ICL, SFT | GPT-3.5, GPT-4 |
| | Dagdelen et al. [182] | NER, RE | SFT | GPT-3, LLaMA2 |
| Astronomical | Shao et al. [173] | NER | ZS Pr | GPT-3.5, GPT-4, Claude-2, LLaMA2 |
| | Evans et al. [164] | NER | DA | GPT-3.5, GPT-4 |
| Historical | González-Gallardo et al. [189] | NER | ZS Pr | GPT-3.5 |
| | CHisIEC [126] | NER, RE | SFT, ICL | ChatGLM2, Alpaca2, GPT-3.5 |
| Legal | Nunes et al. [89] | NER | ICL | Sabia |
| | Oliveira et al. [78] | NER | DA | GPT-3 |
| | Kwak et al. [115] | RE, EE | ICL | GPT-4 |

Tab.5 Statistics of research in specific domains
| Dataset | Summary |
|---|---|
| CoNLL03 [239] | Dataset scope: NER; 1,393 English news articles from Reuters; 909 German news articles; 4 annotated entity types. |
| ACE05 [241] | Dataset scope: NER, RE and EE; various text types and genres; 7 entity types; 7 relation types; 33 event types and 22 argument roles. |

Tab.6 A summary of some representative IE datasets
| Task | Dataset | Domain | #Class | #Train | #Val | #Test |
|---|---|---|---|---|---|---|
| NER | ACE04 [242] | News | 7 | 6,202 | 745 | 812 |
| | ACE05 [241] | News | 7 | 7,299 | 971 | 1,060 |
| | BC5CDR [243] | Biomedical | 2 | 4,560 | 4,581 | 4,797 |
| | Broad Twitter Corpus [244] | Social Media | 3 | 6,338 | 1,001 | 2,000 |
| | CADEC [245] | Biomedical | 1 | 5,340 | 1,097 | 1,160 |
| | CoNLL03 [239] | News | 4 | 14,041 | 3,250 | 3,453 |
| | CoNLLpp [246] | News | 4 | 14,041 | 3,250 | 3,453 |
| | CrossNER-AI [247] | Artificial Intelligence | 14 | 100 | 350 | 431 |
| | CrossNER-Literature [247] | Literary | 12 | 100 | 400 | 416 |
| | CrossNER-Music [247] | Musical | 13 | 100 | 380 | 465 |
| | CrossNER-Politics [247] | Political | 9 | 199 | 540 | 650 |
| | CrossNER-Science [247] | Scientific | 17 | 200 | 450 | 543 |
| | FabNER [248] | Scientific | 12 | 9,435 | 2,182 | 2,064 |
| | Few-NERD [249] | General | 66 | 131,767 | 18,824 | 37,468 |
| | FindVehicle [250] | Traffic | 21 | 21,565 | 20,777 | 20,777 |
| | GENIA [251] | Biomedical | 5 | 15,023 | 1,669 | 1,854 |
| | HarveyNER [252] | Social Media | 4 | 3,967 | 1,301 | 1,303 |
| | MIT-Movie [253] | Social Media | 12 | 9,774 | 2,442 | 2,442 |
| | MIT-Restaurant [253] | Social Media | 8 | 7,659 | 1,520 | 1,520 |
| | MultiNERD [254] | Wikipedia | 16 | 134,144 | 10,000 | 10,000 |
| | NCBI [255] | Biomedical | 4 | 5,432 | 923 | 940 |
| | OntoNotes 5.0 [256] | General | 18 | 59,924 | 8,528 | 8,262 |
| | ShARe13 [257] | Biomedical | 1 | 8,508 | 12,050 | 9,009 |
| | ShARe14 [258] | Biomedical | 1 | 17,404 | 1,360 | 15,850 |
| | SNAP* [259] | Social Media | 4 | 4,290 | 1,432 | 1,459 |
| | TTC [260] | Social Media | 3 | 10,000 | 500 | 1,500 |
| | Tweebank-NER [261] | Social Media | 4 | 1,639 | 710 | 1,201 |
| | Twitter2015* [262] | Social Media | 4 | 4,000 | 1,000 | 3,357 |
| | Twitter2017* [259] | Social Media | 4 | 3,373 | 723 | 723 |
| | TwitterNER7 [263] | Social Media | 7 | 7,111 | 886 | 576 |
| | WikiDiverse* [264] | News | 13 | 6,312 | 755 | 757 |
| | WNUT2017 [265] | Social Media | 6 | 3,394 | 1,009 | 1,287 |
| RE | ACE05 [241] | News | 7 | 10,051 | 2,420 | 2,050 |
| | ADE [266] | Biomedical | 1 | 3,417 | 427 | 428 |
| | CoNLL04 [240] | News | 5 | 922 | 231 | 288 |
| | DocRED [267] | Wikipedia | 96 | 3,008 | 300 | 700 |
| | MNRE* [268] | Social Media | 23 | 12,247 | 1,624 | 1,614 |
| | NYT [269] | News | 24 | 56,196 | 5,000 | 5,000 |
| | Re-TACRED [270] | News | 40 | 58,465 | 19,584 | 13,418 |
| | SciERC [271] | Scientific | 7 | 1,366 | 187 | 397 |
| | SemEval2010 [272] | General | 19 | 6,507 | 1,493 | 2,717 |
| | TACRED [273] | News | 42 | 68,124 | 22,631 | 15,509 |
| | TACREV [274] | News | 42 | 68,124 | 22,631 | 15,509 |
| EE | ACE05 [241] | News | 33/22 | 17,172 | 923 | 832 |
| | CASIE [275] | Cybersecurity | 5/26 | 11,189 | 1,778 | 3,208 |
| | GENIA11 [276] | Biomedical | 9/11 | 8,730 | 1,091 | 1,092 |
| | GENIA13 [277] | Biomedical | 13/7 | 4,000 | 500 | 500 |
| | PHEE [278] | Biomedical | 2/16 | 2,898 | 961 | 968 |
| | RAMS [279] | News | 139/65 | 7,329 | 924 | 871 |
| | WikiEvents [130] | Wikipedia | 50/59 | 5,262 | 378 | 492 |

Tab.7 Statistics of common datasets for information extraction. * denotes that the dataset is multimodal. # refers to the number of categories or sentences; for EE datasets, #Class is given as event types/argument roles. The data in the table is partially referenced from InstructUIE [5]
| Series | Model | Size | Base model | Open source | Instruction tuning | RLHF |
|---|---|---|---|---|---|---|
| BART | BART [281] | 140M (base), 400M (large) | − | ✓ | − | − |
| T5 | T5 [282] | 60M, 220M (base), 770M (large), 3B, 11B | − | ✓ | − | − |
| | mT5 [283] | 300M, 580M (base), 1.2B (large), 3.7B, 13B | − | ✓ | − | − |
| | Flan-T5 [284] | 80M, 250M (base), 780M (large), 3B, 11B | T5 | ✓ | ✓ | − |
| GLM | GLM [285] | 110M (base), 335M (large), 410M, 515M, 2B, 10B | − | ✓ | − | − |
| | ChatGLM series | 6B | GLM | ✓ | ✓ | ✓ |
| LLaMA | LLaMA [286] | 7B, 13B, 33B, 65B | − | ✓ | − | − |
| | Alpaca [287] | 7B, 13B | LLaMA | ✓ | ✓ | − |
| | Vicuna [288] | 7B, 13B | LLaMA | ✓ | ✓ | − |
| | LLaMA2 [289] | 7B, 13B, 70B | − | ✓ | − | − |
| | LLaMA2-chat [289] | 7B, 13B, 70B | LLaMA2 | ✓ | ✓ | ✓ |
| | Code-LLaMA [290] | 7B, 13B, 34B | LLaMA2 | ✓ | − | − |
| | LLaMA3 series | 8B, 70B, 405B | − | ✓ | ✓ | ✓ |
| GPT | GPT-2 [291] | 117M, 345M, 762M, 1.5B | − | ✓ | − | − |
| | GPT-3 [292] | 175B | − | − | − | − |
| | GPT-J [293] | 6B | GPT-3 | ✓ | − | − |
| | Code-davinci-002 [294] | − | GPT-3 | − | ✓ | − |
| | Text-davinci-002 [294] | − | GPT-3 | − | ✓ | − |
| | Text-davinci-003 [294] | − | GPT-3 | − | ✓ | ✓ |
| | GPT-3.5-turbo series [200] | − | − | − | ✓ | ✓ |
| | GPT-4 series [9] | − | − | − | ✓ | ✓ |

Tab.8 The common backbones for generative information extraction. ✓ denotes that the model is open source, instruction-tuned, or trained with RLHF, respectively. We mark the commonly used base and large versions for better reference
1
L, Zhong J, Wu Q, Li H, Peng X Wu . A comprehensive survey on automatic knowledge graph construction. ACM Computing Surveys, 2024, 56( 4): 94
2
C, Fu T, Chen M, Qu W, Jin X Ren . Collaborative policy learning for open knowledge graph reasoning. In: Proceedings of 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing. 2019, 2672−2681
3
R K, Srihari W Li . Information extraction supported question answering. In: Proceedings of the 8th Text REtrieval Conference. 1999
4
Y, Lu Q, Liu D, Dai X, Xiao H, Lin X, Han L, Sun H Wu . Unified structure generation for universal information extraction. In: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics. 2022, 5755−5772
5
X, Wang W, Zhou C, Zu H, Xia T, Chen Y, Zhang R, Zheng J, Ye Q, Zhang T, Gui J, Kang J, Yang S, Li C Du . InstructUIE: multi-task instruction tuning for unified information extraction. 2023, arXiv preprint arXiv: 2304.08085
6
Y, Guo Z, Li X, Jin Y, Liu Y, Zeng W, Liu X, Li P, Yang L, Bai J, Guo X Cheng . Retrieval-augmented code generation for universal information extraction. 2023, arXiv preprint arXiv: 2311.02962
7
Y, Zhong T, Xu P Luo . Contextualized hybrid prompt-tuning for generation-based event extraction. In: Proceedings of the 16th International Conference on Knowledge Science, Engineering and Management. 2023, 374−386
8
S, Zhou B, Yu A, Sun C, Long J, Li J Sun . A survey on neural open information extraction: current status and future directions. In: Proceedings of the 31st International Joint Conference on Artificial Intelligence. 2022, 5694−5701
9
, OpenAIJ, Achiam S, Adler S, Agarwal L, Ahmad , et al.. GPT-4 technical report. 2023, arXiv preprint arXiv: 2303.08774
10
Q, Liu Y, He D, Lian Z, Zheng T, Xu C, Liu E Chen . UniMEL: a unified framework for multimodal entity linking with large language models. 2024, arXiv preprint arXiv: 2407.16160
11
W, Peng G, Li Y, Jiang Z, Wang D, Ou X, Zeng D, Xu T, Xu E Chen . Large language model based long-tail query rewriting in Taobao search. In: Companion Proceedings of the ACM Web Conference 2024. 2024, 20−28
12
C, Zhang H, Zhang S, Wu D, Wu T, Xu Y, Gao Y, Hu E Chen . NoteLLM-2: multimodal large representation models for recommendation. 2024, arXiv preprint arXiv: 2405.16789
13
P, Liu W, Yuan J, Fu Z, Jiang H, Hayashi G Neubig . Pre-train, prompt, and predict: a systematic survey of prompting methods in natural language processing. ACM Computing Surveys, 2023, 55( 9): 195
14
Y, Lyu Z, Li S, Niu F, Xiong B, Tang W, Wang H, Wu H, Liu T, Xu E Chen . CRUD-RAG: a comprehensive Chinese benchmark for retrieval-augmented generation of large language models. 2024, arXiv preprint arXiv: 2401.17043
15
Y, Lyu Z, Niu Z, Xie C, Zhang T, Xu Y, Wang E Chen . Retrieve-plan-generation: an iterative planning and answering framework for knowledge-intensive LLM generation. 2024, arXiv preprint arXiv: 2406.14979
16
P, Jia Y, Liu X, Zhao X, Li C, Hao S, Wang D Yin . MILL: mutual verification with large language models for zero-shot query expansion. In: Proceedings of 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 2024, 2498−2518
17
M, Wang Y, Zhao J, Liu J, Chen C, Zhuang J, Gu R, Guo X Zhao . Large multimodal model compression via iterative efficient pruning and distillation. In: Companion Proceedings of the ACM Web Conference 2024. 2024, 235−244
18
Z, Fu X, Li C, Wu Y, Wang K, Dong X, Zhao M, Zhao H, Guo R Tang . A unified framework for multi-domain CTR prediction via large language models. 2023, arXiv preprint arXiv: 2312.10743
19
P, Jia Y, Liu X, Li X, Zhao Y, Wang Y, Du X, Han X, Wei S, Wang D Yin . G3: an effective and adaptive framework for worldwide geolocalization using large multi-modality models. 2024, arXiv preprint arXiv: 2405.14702
20
C, Zhang S, Wu H, Zhang T, Xu Y, Gao Y, Hu E Chen . NoteLLM: a retrievable large language model for note recommendation. In: Companion Proceedings of the ACM Web Conference 2024. 2024, 170−179
21
X, Wang Z, Chen Z, Xie T, Xu Y, He E Chen . In-context former: lightning-fast compressing context for large language model. 2024, arXiv preprint arXiv: 2406.13618
22
J, Zhu S, Liu Y, Yu B, Tang Y, Yan Z, Li F, Xiong T, Xu M B Blaschko . FastMem: fast memorization of prompt improves context awareness of large language models. 2024, arXiv preprint arXiv: 2406.16069
23
L, Wang C, Ma X, Feng Z, Zhang H, Yang J, Zhang Z, Chen J, Tang X, Chen Y, Lin W X, Zhao Z, Wei J Wen . A survey on large language model based autonomous agents. Frontiers of Computer Science, 2024, 18( 6): 186345
24
Z, Guan L, Wu H, Zhao M, He J Fan . Enhancing collaborative semantics of language model-driven recommendations via graph-aware learning. 2024, arXiv preprint arXiv: 2406.13235
25
Huang J, She Q, Jiang W, Wu H, Hao Y, Xu T, Wu F. QDMR-based planning-and-solving prompting for complex reasoning tasks. In: Proceedings of 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation. 2024, 13395−13406
26
C, Fu Y, Dai Y, Luo L, Li S, Ren R, Zhang Z, Wang C, Zhou Y, Shen M, Zhang P, Chen Y, Li S, Lin S, Zhao K, Li T, Xu X, Zheng E, Chen R, Ji X Sun . Video-MME: the first-ever comprehensive evaluation benchmark of multi-modal LLMs in video analysis. 2024, arXiv preprint arXiv: 2405.21075
27
X, Li L, Su P, Jia X, Zhao S, Cheng J, Wang D Yin . Agent4Ranking: semantic robust ranking via personalized query rewriting using multi-agent LLM. 2023, arXiv preprint arXiv: 2312.15450
28
J, Qi C, Zhang X, Wang K, Zeng J, Yu J, Liu L, Hou J, Li X Bin . Preserving knowledge invariance: rethinking robustness evaluation of open information extraction. In: Proceedings of 2023 Conference on Empirical Methods in Natural Language Processing. 2023, 5876−5890
29
W, Chen L, Zhao P, Luo T, Xu Y, Zheng E Chen . HEProto: a hierarchical enhancing ProtoNet based on multi-task learning for few-shot named entity recognition. In: Proceedings of the 32nd ACM International Conference on Information and Knowledge Management. 2023, 296−305
30
J, Lou Y, Lu D, Dai W, Jia H, Lin X, Han L, Sun H Wu . Universal information extraction as unified semantic matching. In: Proceedings of the 37th AAAI Conference on Artificial Intelligence. 2023, 13318−13326
31
M, Josifoski Cao N, De M, Peyrard F, Petroni R West . GenIE: generative information extraction. In: Proceedings of 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 2022, 4626−4643
32
O, Sainz I, García-Ferrero R, Agerri Lacalle O L, de G, Rigau E Agirre . GoLLIE: annotation guidelines improve zero-shot information-extraction. In: Proceedings of the ICLR 2024. 2024
33
G, Paolini B, Athiwaratkun J, Krone J, Ma A, Achille R, Anubhai Santos C N, dos B, Xiang S Soatto . Structured prediction as translation between augmented natural languages. In: Proceedings of the 9th International Conference on Learning Representations. 2021
34
C, Gan Q, Zhang T Mori . GIELLM: Japanese general information extraction large language model utilizing mutual reinforcement effect. 2023, arXiv preprint arXiv: 2311.06838
35
H, Fei S, Wu J, Li B, Li F, Li L, Qin M, Zhang M, Zhang T S Chua . LasUIE: unifying information extraction with latent adaptive structure-aware generative language model. In: Proceedings of the 36th International Conference on Neural Information Processing Systems. 2022, 1125
36
P, Li T, Sun Q, Tang H, Yan Y, Wu X, Huang X Qiu . CodeIE: large code generation models are better few-shot information extractors. In: Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics. 2023, 15339−15353
37
H, Yan T, Gui J, Dai Q, Guo Z, Zhang X Qiu . A unified generative framework for various NER subtasks. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing. 2021, 5808−5822
38
Huang K H, Tang S, Peng N. Document-level entity-based extraction as template generation. In: Proceedings of 2021 Conference on Empirical Methods in Natural Language Processing. 2021, 5257−5269
39
P L H, Cabot R Navigli . REBEL: relation extraction by end-to-end language generation. In: Proceedings of the Findings of the Association for Computational Linguistics: EMNLP 2021. 2021, 2370−2381
40
X, Wei X, Cui N, Cheng X, Wang X, Zhang S, Huang P, Xie J, Xu Y, Chen M, Zhang Y, Jiang W Han . ChatIE: zero-shot information extraction via chatting with ChatGPT. 2023, arXiv preprint arXiv: 2302.10205
41
X, Wang S, Li H Ji . Code4Struct: code generation for few-shot event structure prediction. In: Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics. 2023, 3640−3663
42
S, Wang X, Sun X, Li R, Ouyang F, Wu T, Zhang J, Li G Wang . GPT-NER: named entity recognition via large language models. 2023, arXiv preprint arXiv: 2304.10428
43
D, Ashok Z C Lipton . PromptNER: prompting for named entity recognition. 2023, arXiv preprint arXiv: 2305.15444
44
X, Xu Y, Zhu X, Wang N Zhang . How to unleash the power of large language models for few-shot relation extraction? In: Proceedings of the 4th Workshop on Simple and Efficient Natural Language Processing. 2023, 190−200
45
Z, Nasar S W, Jaffry M K Malik . Named entity recognition and relation extraction: state-of-the-art. ACM Computing Surveys, 2022, 54( 1): 20
46
H, Ye N, Zhang H, Chen H Chen . Generative knowledge graph construction: a review. In: Proceedings of 2022 Conference on Empirical Methods in Natural Language Processing. 2022, 1−17
47
L, Foppiano G, Lambard T, Amagasa M Ishii . Mining experimental data from materials science literature with large language models: an evaluation study. Science and Technology of Advanced Materials: Methods, 2024, 4( 1): 2356506
48
H, Liu W, Xue Y, Chen D, Chen X, Zhao K, Wang L, Hou R, Li W Peng . A survey on hallucination in large vision-language models. 2024, arXiv preprint arXiv: 2402.00253
49
P, Sahoo A K, Singh S, Saha V, Jain S, Mondal A Chadha . A systematic survey of prompt engineering in large language models: techniques and applications. 2024, arXiv preprint arXiv: 2402.07927
50
D, Xu Z, Zhang Z, Zhu Z, Lin Q, Liu X, Wu T, Xu W, Wang Y, Ye X, Zhao E, Chen Y Zheng . Editing factual knowledge and explanatory ability of medical large language models. 2024, arXiv preprint arXiv: 2402.18099
51
Yuan S, Yang D, Liang J, Li Z, Liu J, Huang J, Xiao Y. Generative entity typing with curriculum learning. In: Proceedings of 2022 Conference on Empirical Methods in Natural Language Processing. 2022, 3061−3073
52
Y, Feng A, Pratapa D Mortensen . Calibrated seq2seq models for efficient and generalizable ultra-fine entity typing. In: Proceedings of the Findings of the Association for Computational Linguistics. 2023, 15550−15560
53
L, Cui Y, Wu J, Liu S, Yang Y Zhang . Template-based named entity recognition using BART. In: Proceedings of the Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021. 2021, 1835−1845
54
S, Zhang Y, Shen Z, Tan Y, Wu W Lu . De-bias for generative extraction in unified NER task. In: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics. 2022, 808−818
55
L, Wang R, Li Y, Yan Y, Yan S, Wang W, Wu W Xu . InstructionNER: a multi-task instruction-based generative framework for few-shot NER. 2022, arXiv preprint arXiv: 2203.03903
56
Y, Xia Y, Zhao W, Wu S Li . Debiasing generative named entity recognition by calibrating sequence likelihood. In: Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics. 2023, 1137−1148
57
C, Cai Q, Wang B, Liang B, Qin M, Yang K F, Wong R Xu . In-context learning for few-shot multimodal named entity recognition. In: Proceedings of the Findings of the Association for Computational Linguistics: EMNLP 2023. 2023, 2969−2979
58
X, Hu Y, Jiang A, Liu Z, Huang P, Xie F, Huang L, Wen P S Yu . Entity-to-text based data augmentation for various named entity recognition tasks. In: Proceedings of the Findings of the Association for Computational Linguistics: ACL 2023. 2023, 9072−9087
59
A, Amalvy V, Labatut R Dufour . Learning to rank context for named entity recognition using a synthetic dataset. In: Proceedings of 2023 Conference on Empirical Methods in Natural Language Processing. 2023, 10372−10382
60
X, Chen L, Li S, Qiao N, Zhang C, Tan Y, Jiang F, Huang H Chen . One model for all domains: collaborative domain-prefix tuning for cross-domain NER. In: Proceedings of the 32nd International Joint Conference on Artificial Intelligence. 2023, 559
61
R, Zhang Y, Li Y, Ma M, Zhou L Zou . LLMaAA: making large language models as active annotators. In: Proceedings of the Findings of the Association for Computational Linguistics: EMNLP 2023. 2023, 13088−13103
62
Y, Ma Y, Cao Y, Hong A Sun . Large language model is not a good few-shot information extractor, but a good reranker for hard samples! In: Proceedings of the Findings of the Association for Computational Linguistics: EMNLP 2023. 2023, 10572−10601
63
Xie T, Li Q, Zhang Y, Liu Z, Wang H. Self-improving for zero-shot named entity recognition with large language models. In: Proceedings of 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 2024, 583−593
64
W, Zhou S, Zhang Y, Gu M, Chen H Poon . UniversalNER: targeted distillation from large language models for open named entity recognition. In: Proceedings of the 12th International Conference on Learning Representations. 2024
65
X, Zhang M, Tan J, Zhang W Zhu . NAG-NER: a unified non-autoregressive generation framework for various NER tasks. In: Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics. 2023, 676−686
66
J, Su H Yu . Unified named entity recognition as multi-label sequence generation. In: Proceedings of 2023 International Joint Conference on Neural Networks. 2023, 1−8
67
Y, Ding J, Li P, Wang Z, Tang Y, Bowen M Zhang . Rethinking negative instances for generative named entity recognition. In: Proceedings of the Findings of the Association for Computational Linguistics ACL 2024. 2024, 3461−3475
68
S, Bogdanov A, Constantin T, Bernard B, Crabbé E Bernard . NuNER: entity recognition encoder pre-training via LLM-annotated data. 2024, arXiv preprint arXiv: 2402.15343
69
J, Chen Y, Lu H, Lin J, Lou W, Jia D, Dai H, Wu B, Cao X, Han L Sun . Learning in-context learning for named entity recognition. In: Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics. 2023, 13661−13675
70
Z, Zhang Y, Zhao H, Gao M Hu . LinkNER: linking local named entity recognition models to large language models using uncertainty. In: Proceedings of the ACM Web Conference 2024. 2024, 4047−4058
71
X, Tang J, Wang Q Su . Small language model is a good guide for large language model in Chinese entity relation extraction. 2024, arXiv preprint arXiv: 2402.14373
72
N, Popovič M Färber . Embedded named entity recognition using probing classifiers. 2024, arXiv preprint arXiv: 2403.11747
73
Y, Heng C, Deng Y, Li Y, Yu Y, Li R, Zhang C Zhang . ProgGen: generating named entity recognition datasets step-by-step with self-reflexive large language models. In: Proceedings of the Findings of the Association for Computational Linguistics ACL 2024. 2024, 15992−16030
74
Y, Mo J, Yang J, Liu S, Zhang J, Wang Z Li . C-ICL: contrastive in-context learning for information extraction. 2024, arXiv preprint arXiv: 2402.11254
75
V K, Keloth Y, Hu Q, Xie X, Peng Y, Wang A, Zheng M, Selek K, Raja C H, Wei Q, Jin Z, Lu Q, Chen H Xu . Advancing entity recognition in biomedicine via instruction tuning of large language models. Bioinformatics, 2024, 40( 4): btae163
76
S, Kim K, Seo H, Chae J, Yeo D Lee . VerifiNER: verification-augmented NER via knowledge-grounded reasoning with large language models. In: Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics. 2024, 2441−2461
77
Y, Li R, Ramprasad C Zhang . A simple but effective approach to improve structured language model output for information extraction. 2024, arXiv preprint arXiv: 2402.13364
78
Oliveira V, Nogueira G, Faleiros T, Marcacini R. Combining prompt-based language models and weak supervision for labeling named entity recognition on legal documents. Artificial Intelligence and Law, 2024: 1-21
79
J, Lu Z, Yang Y, Wang X, Liu B M, Namee C Huang . PaDeLLM-NER: parallel decoding in large language models for named entity recognition. 2024, arXiv preprint arXiv: 2402.04838
80
N, Bölücü M, Rybinski S Wan . Impact of sample selection on in-context learning for entity extraction from scientific writing. In: Proceedings of the Findings of the Association for Computational Linguistics: EMNLP 2023. 2023, 5090−5107
81
J, Liu J, Wang H, Huang R, Zhang M, Yang T Zhao . Improving LLM-based health information extraction with in-context learning. In: Proceedings of the 9th China Health Information Processing Conference. 2024, 49−59
82
C, Wu W, Ke P, Wang Z, Luo G, Li W Chen . ConsistNER: towards instructive NER demonstrations for LLMs with the consistency of ontology and context. In: Proceedings of the 38th AAAI Conference on Artificial Intelligence. 2024, 19234−19242
83
M, Naguib X, Tannier A Névéol . Few-shot clinical entity recognition in English, French and Spanish: masked language models outperform generative model prompting. 2024, arXiv preprint arXiv: 2402.12801
84
U, Zaratiana N, Tomeh P, Holat T Charnois . GliNER: generalist model for named entity recognition using bidirectional transformer. In: Proceedings of 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 2024, 5364−5376
85
M, Munnangi S, Feldman B, Wallace S, Amir T, Hope A Naik . On-the-fly definition augmentation of LLMs for biomedical NER. In: Proceedings of 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 2024, 3833−3854
86
M, Zhang B, Wang H, Fei M Zhang . In-context learning for few-shot nested named entity recognition. 2024, arXiv preprint arXiv: 2402.01182
87
F, Yan P, Yu X Chen . LTNER: Large language model tagging for named entity recognition with contextualized entity marking. 2024, arXiv preprint arXiv: 2404.05624
88
G, Jiang Z, Luo Y, Shi D, Wang J, Liang D Yang . ToNER: type-oriented named entity recognition with generative language model. In: Proceedings of 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation. 2024, 16251−16262
89
R O, Nunes A S, Spritzer Sasso Freitas C, Dal D S Balreira . Out of sesame street: a study of portuguese legal named entity recognition through in-context learning. In: Proceedings of the 26th International Conference on Enterprise Information Systems. 2024
90
W, Hou W, Zhao X, Liu W Guo . Knowledge-enriched prompt for low-resource named entity recognition. ACM Transactions on Asian and Low-Resource Language Information Processing, 2024, 23( 5): 72
91
M, Li H, Zhou H, Yang R Zhang . RT: a retrieving and chain-of-thought framework for few-shot medical named entity recognition. Journal of the American Medical Informatics Association, 2024, 13( 9): 1929–1938
92
G, Jiang Z, Ding Y, Shi D Yang . P-ICL: point in-context learning for named entity recognition with large language models. 2024, arXiv preprint arXiv: 2405.04960
93
Xie T, Zhang J, Zhang Y, Liang Y, Li Q, Wang H. Retrieval augmented instruction tuning for open ner with large language models. 2024, arXiv preprint arXiv:2406.17305
94
J, Li H, Li D, Sun J, Wang W, Zhang Z, Wang G Pan . LLMs as bridges: reformulating grounded multimodal named entity recognition. In: Proceedings of the Findings of the Association for Computational Linguistics ACL 2024. 2024, 1302−1318
95
J, Ye N, Xu Y, Wang J, Zhou Q, Zhang T, Gui X Huang . LLM-DA: data augmentation via large language models for few-shot named entity recognition. 2024, arXiv preprint arXiv: 2402.14568
96
G, Li P, Wang W Ke . Revisiting large language models as zero-shot relation extractors. In: Proceedings of the Findings of the Association for Computational Linguistics: EMNLP 2023. 2023, 6877−6892
97
C, Pang Y, Cao Q, Ding P Luo . Guideline learning for in-context information extraction. In: Proceedings of 2023 Conference on Empirical Methods in Natural Language Processing. 2023, 15372−15389
98
K, Zhang B J, Gutierrez Y Su . Aligning instruction tasks unlocks large language models as zero-shot relation extractors. In: Proceedings of the Findings of the Association for Computational Linguistics: ACL 2023. 2023, 794−812
99
Z, Wan F, Cheng Z, Mao Q, Liu H, Song J, Li S Kurohashi . GPT-RE: in-context learning for relation extraction using large language models. In: Proceedings of 2023 Conference on Empirical Methods in Natural Language Processing. 2023, 3534−3547
100
M D, Ma X, Wang P N, Kung P J, Brantingham N, Peng W Wang . STAR: boosting low-resource information extraction by structure-to-text data generation with large language models. In: Proceedings of the AAAI Conference on Artificial Intelligence. 2024, 18751−18759
101
Q, Wang K, Zhou Q, Qiao Y, Li Q Li . Improving unsupervised relation extraction by augmenting diverse sentence pairs. In: Proceedings of 2023 Conference on Empirical Methods in Natural Language Processing. 2023, 12136−12147
102
B, Li D, Yu W, Ye J, Zhang S Zhang . Sequence generation with label augmentation for relation extraction. In: Proceedings of the 37th AAAI Conference on Artificial Intelligence. 2023, 13043−13050
103
Q, Guo Y, Yang H, Yan X, Qiu Z Zhang . DORE: document ordered relation extraction based on generative framework. In: Proceedings of the Findings of the Association for Computational Linguistics: EMNLP 2022. 2022, 3463−3474
104
X, Ma J, Li M Zhang . Chain of thought with explicit evidence reasoning for few-shot relation extraction. In: Proceedings of the Findings of the Association for Computational Linguistics: EMNLP 2023. 2023, 2334−2352
105
S, Zhou Y, Meng B, Jin J Han . Grasping the essentials: tailoring large language models for zero-shot relation extraction. 2024, arXiv preprint arXiv: 2402.11142
106
J, Qi K, Ji X, Wang J, Yu K, Zeng L, Hou J, Li B Xu . Mastering the task of open information extraction with large language models and consistent reasoning environment. 2023, arXiv preprint arXiv: 2310.10590
107
G, Li P, Wang J, Liu Y, Guo K, Ji Z, Shang Z Xu . Meta in-context learning makes large language models better zero and few-shot relation extractors. In: Proceedings of the 33rd International Joint Conference on Artificial Intelligence. 2024
108
W, Otto S, Upadhyaya S Dietze . Enhancing software-related information extraction via single-choice question answering with large language models. 2024, arXiv preprint arXiv: 2404.05587
109
Z, Shi H Luo . CRE-LLM: a domain-specific Chinese relation extraction framework with fine-tuned large language model. 2024, arXiv preprint arXiv: 2404.18085
110
G, Li P, Wang W, Ke Y, Guo K, Ji Z, Shang J, Liu Z Xu . Recall, retrieve and reason: towards better in-context relation extraction. In: Proceedings of the 33rd International Joint Conference on Artificial Intelligence. 2024
111
G, Li Z, Xu Z, Shang J, Liu K, Ji Y Guo . Empirical analysis of dialogue relation extraction with large language models. In: Proceedings of the 33rd International Joint Conference on Artificial Intelligence. 2024
Li Y, Peng X, Li J, Zuo X, Peng S, Pei D, Tao C, Xu H, Hong N. Relation extraction using large language models: a case study on acupuncture point locations. Journal of the American Medical Informatics Association, 2024: ocae233
114
Z, Fan S He . Efficient data learning for open information extraction with pre-trained language models. In: Proceedings of the Findings of the Association for Computational Linguistics: EMNLP 2023. 2023, 13056−13063
115
A S, Kwak C, Jeong G, Forte D, Bambauer C, Morrison M Surdeanu . Information extraction from legal wills: how well does GPT-4 do? In: Proceedings of the Findings of the Association for Computational Linguistics: EMNLP 2023. 2023, 4336−4353
116
Q, Sun K, Huang X, Yang R, Tong K, Zhang S Poria . Consistency guided knowledge retrieval and denoising in LLMs for zero-shot document-level relation triplet extraction. In: Proceedings of the ACM Web Conference 2024. 2024, 4407−4416
117
Y, Ozyurt S, Feuerriegel C Zhang . In-context few-shot relation extraction via pre-trained language models. 2023, arXiv preprint arXiv: 2310.11085
118
L, Xue D, Zhang Y, Dong J Tang . AutoRE: document-level relation extraction with large language models. In: Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics. 2024, 211−220
119
Y, Liu X, Peng T, Du J, Yin W, Liu X Zhang . ERA-CoT: improving chain-of-thought through entity relationship analysis. In: Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics. 2024, 8780−8794
120
G, Li W, Ke P, Wang Z, Xu K, Ji J, Liu Z, Shang Q Luo . Unlocking instructive in-context learning with tabular prompting for relational triple extraction. In: Proceedings of 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation. 2024
121
Z, Ding W, Huang J, Liang Y, Xiao D Yang . Improving recall of large language models: a model collaboration approach for relational triple extraction. In: Proceedings of 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation. 2024, 8890−8901
122
X, Ni P, Li H Li . Unified text structuralization with instruction-tuned language models. 2023, arXiv preprint arXiv: 2303.14956
123
U, Zaratiana N, Tomeh P, Holat T Charnois . An autoregressive text-to-graph framework for joint entity and relation extraction. In: Proceedings of the 38th AAAI Conference on Artificial Intelligence. 2024, 19477−19487
124
L, Peng Z, Wang F, Yao Z, Wang J Shang . MetaIE: distilling a meta model from LLM for all kinds of information extraction tasks. 2024, arXiv preprint arXiv: 2404.00457
125
J, Atuhurra S C, Dujohn H, Kamigaito H, Shindo T Watanabe . Distilling named entity recognition models for endangered species from large language models. 2024, arXiv preprint arXiv: 2403.15430
126
X, Tang Q, Su J, Wang Z Deng . CHisIEC: an information extraction corpus for ancient Chinese history. In: Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation. 2024, 3192−3202
127
Veyseh A P, Ben V, Lai F, Dernoncourt T H Nguyen . Unleash GPT-2 power for event detection. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing. 2021, 6271−6282
128
N, Xia H, Yu Y, Wang J, Xuan X Luo . DAFS: a domain aware few shot generative model for event detection. Machine Learning, 2023, 112( 3): 1011–1031
129
Z, Cai P N, Kung A, Suvarna M, Ma H, Bansal B, Chang P J, Brantingham W, Wang N Peng . Improving event definition following for zero-shot event detection. In: Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics. 2024
130
S, Li H, Ji J Han . Document-level event argument extraction by conditional generation. In: Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 2021, 894−908
131
Y, Lu H, Lin J, Xu X, Han J, Tang A, Li L, Sun M, Liao S Chen . Text2Event: controllable sequence-to-structure generation for end-to-end event extraction. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing. 2021, 2795−2806
132
Y, Zhou T, Shen X, Geng G, Long D Jiang . ClarET: pre-training a correlation-aware context-to-event transformer for event-centric generation and classification. In: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics. 2022, 2559−2575
133
K H, Huang I, Hsu P, Natarajan K W, Chang N Peng . Multilingual generative language models for zero-shot cross-lingual event argument extraction. In: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics. 2022, 4633−4646
134
Y, Ma Z, Wang Y, Cao M, Li M, Chen K, Wang J Shao . Prompt for extraction? PAIE: prompting argument interaction for event argument extraction. In: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics. 2022, 6759−6774
135
X, Liu H, Huang G, Shi B Wang . Dynamic prefix-tuning for generative template-based event extraction. In: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics. 2022, 5216−5228
136
E, Cai B O’Connor . A Monte Carlo language model pipeline for zero-shot sociopolitical event extraction. In: Proceedings of the NeurIPS 2023. 2023
137
L, Luo Y Xu . Context-aware prompt for generation-based event argument extraction with diffusion models. In: Proceedings of the 32nd ACM International Conference on Information and Knowledge Management. 2023, 1717−1725
138
D, Lu S, Ran J, Tetreault A Jaimes . Event extraction as question generation and answering. In: Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics. 2023, 1666−1688
139
Nguyen C, van H, Man T H Nguyen . Contextualized soft prompts for extraction of event arguments. In: Proceedings of the Findings of the Association for Computational Linguistics: ACL 2023. 2023, 4352−4361
140
I H, Hsu Z, Xie K, Huang P, Natarajan N Peng . AMPERE: AMR-aware prefix for generation-based event argument extraction model. In: Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics. 2023, 10976−10993
141
J, Duan X, Liao Y, An J Wang . KeyEE: enhancing low-resource generative event extraction with auxiliary keyword sub-prompt. Big Data Mining and Analytics, 2024, 7( 2): 547–560
142
Z, Lin H, Zhang Y Song . Global constraints with prompting for zero-shot event argument classification. In: Proceedings of the Findings of the Association for Computational Linguistics: EACL 2023. 2023, 2482−2493
143
W, Liu L, Zhou D, Zeng Y, Xiao S, Cheng C, Zhang G, Lee M, Zhang W Chen . Beyond single-event extraction: towards efficient document-level multi-event argument extraction. In: Proceedings of the Findings of the Association for Computational Linguistics ACL 2024. 2024, 9470−9487
144
X F, Zhang C, Blum T, Choji S, Shah A Vempala . ULTRA: unleash LLMs’ potential for event argument extraction through hierarchical modeling and pair-wise self-refinement. In: Proceedings of the Findings of the Association for Computational Linguistics ACL 2024. 2024
145
Z, Sun G, Pergola B, Wallace Y He . Leveraging ChatGPT in pharmacovigilance event extraction: an empirical study. In: Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics. 2024, 344−357
146
H, Zhou J, Qian Z, Feng L, Hui Z, Zhu K Mao . LLMs learn task heuristics from demonstrations: a heuristic-driven prompting strategy for document-level event argument extraction. In: Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics. 2024, 11972−11990
147
I H, Hsu K H, Huang E, Boschee S, Miller P, Natarajan K W, Chang N Peng . DEGREE: a data-efficient generation-based event extraction model. In: Proceedings of 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 2022, 1890−1908
148
G, Zhao X, Gong X, Yang G, Dong S, Lu S Li . DemoSG: demonstration-enhanced schema-guided generation for low-resource event extraction. In: Proceedings of the Findings of the Association for Computational Linguistics: EMNLP 2023. 2023, 1805−1816
149
J, Gao H, Zhao W, Wang C, Yu R Xu . EventRL: enhancing event extraction with outcome supervision for large language models. 2024, arXiv preprint arXiv: 2402.11430
150
K H, Huang I H, Hsu T, Parekh Z, Xie Z, Zhang P, Natarajan K W, Chang N, Peng H Ji . TextEE: benchmark, reevaluation, reflections, and future challenges in event extraction. In: Proceedings of the Findings of the Association for Computational Linguistics ACL 2024. 2024, 12804−12825
151
C, Wang X, Liu Z, Chen H, Hong J, Tang D Song . DeepStruct: pretraining of language models for structure prediction. In: Proceedings of the Findings of the Association for Computational Linguistics: ACL 2022. 2022, 803−823
152
J, Li Y, Zhang B, Liang K F, Wong R Xu . Set learning for generative information extraction. In: Proceedings of 2023 Conference on Empirical Methods in Natural Language Processing. 2023, 13043−13052
153
X, Wei Y, Chen N, Cheng X, Cui J, Xu W Han . CollabKG: a learnable human-machine-cooperative information extraction toolkit for (event) knowledge graph construction. In: Proceedings of 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation. 2024
154
J, Wang Y, Chang Z, Li N, An Q, Ma L, Hei H, Luo Y, Lu F Ren . TechGPT-2.0: a large language model project to solve the task of knowledge graph construction. 2024, arXiv preprint arXiv: 2401.04507
155
X, Xiao Y, Wang N, Xu Y, Wang H, Yang M, Wang Y, Luo L, Wang W, Mao D Zeng . YAYI-UIE: a chat-enhanced instruction tuning framework for universal information extraction. 2023, arXiv preprint arXiv: 2312.15548
156
J, Xu M, Sun Z, Zhang J Zhou . ChatUIE: exploring chat-based unified information extraction using large language models. In: Proceedings of 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation. 2024, 3146−3152
157
H, Gui L, Yuan H, Ye N, Zhang M, Sun L, Liang H Chen . IEPile: unearthing large scale schema-conditioned information extraction corpus. In: Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics. 2024, 127−146
158
Q, Guo Y, Guo J Zhao . Diluie: constructing diverse demonstrations of in-context learning with large language model for unified information extraction. Neural Computing and Applications, 2024, 36( 22): 13491–13512
159
Z, Bi J, Chen Y, Jiang F, Xiong W, Guo H, Chen N Zhang . CodeKGC: code language model for generative knowledge graph construction. ACM Transactions on Asian and Low-Resource Language Information Processing, 2024, 23( 3): 45
160
Z, Li Y, Zeng Y, Zuo W, Ren W, Liu M, Su Y, Guo Y, Liu L, Lixiang Z, Hu L, Bai W, Li Y, Liu P, Yang X, Jin J, Guo X Cheng . KnowCoder: coding structured knowledge into LLMs for universal information extraction. In: Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics. 2024, 8758−8779
161
Li J, Jia Z, Zheng Z. Semi-automatic data enhancement for document-level relation extraction with distant supervision from large language models. In: Proceedings of 2023 Conference on Empirical Methods in Natural Language Processing. 2023, 5495−5505
162
Tang R, Han X, Jiang X, Hu X. Does synthetic data generation of LLMs help clinical text mining? 2023, arXiv preprint arXiv: 2303.04360
163
S, Meoni la Clergerie E, De T Ryffel . Large language models as instructors: a study on multilingual clinical entity extraction. In: Proceedings of the 22nd Workshop on Biomedical Natural Language Processing and BioNLP Shared Tasks. 2023, 178−190
164
Evans J, Sadruddin S, D’Souza J. Astro-NER–astronomy named entity recognition: is GPT a good domain expert annotator? 2024, arXiv preprint arXiv: 2405.02602
165
Y, Naraki R, Yamaki Y, Ikeda T, Horie H Naganuma . Augmenting NER datasets with LLMs: towards automated and refined annotation. 2024, arXiv preprint arXiv: 2404.01334
166
F, Chen Y Feng . Chain-of-thought prompt distillation for multimodal named entity recognition and multimodal relation extraction. 2023, arXiv preprint arXiv: 2306.14122
167
J, Li H, Li Z, Pan D, Sun J, Wang W, Zhang G Pan . Prompting ChatGPT in MNER: enhanced multimodal named entity recognition with auxiliary refined knowledge. In: Proceedings of the Findings of the Association for Computational Linguistics: EMNLP 2023. 2023, 2787−2802
168
M, Josifoski M, Sakota M, Peyrard R West . Exploiting asymmetry for synthetic training data generation: synthIE and the case of information extraction. In: Proceedings of 2023 Conference on Empirical Methods in Natural Language Processing. 2023, 1555−1574
169
S, Wadhwa S, Amir B Wallace . Revisiting relation extraction in the era of large language models. In: Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics. 2023, 15566−15589
170
C, Yuan Q, Xie S Ananiadou . Zero-shot temporal relation extraction with ChatGPT. In: Proceedings of the 22nd Workshop on Biomedical Natural Language Processing and BioNLP Shared Tasks. 2023, 92−102
171
J, Bian J, Zheng Y, Zhang S Zhu . Inspire the large language model by external knowledge on biomedical named entity recognition. 2023, arXiv preprint arXiv: 2309.12278
172
Y, Hu Q, Chen J, Du X, Peng V K, Keloth X, Zuo Y, Zhou Z, Li X, Jiang Z, Lu K, Roberts H Xu . Improving large language models for clinical named entity recognition via prompt engineering. Journal of the American Medical Informatics Association, 2024, 31( 9): 1812–1820
173
W, Shao R, Zhang P, Ji D, Fan Y, Hu X, Yan C, Cui Y, Tao L, Mi L Chen . Astronomical knowledge entity extraction in astrophysics journal articles via large language models. Research in Astronomy and Astrophysics, 2024, 24( 6): 065012
174
S, Geng M, Josifosky M, Peyrard R West . Flexible grammar-based constrained decoding for language models. 2023, arXiv preprint arXiv: 2305.13971
175
T, Liu Y E, Jiang N, Monath R, Cotterell M Sachan . Autoregressive structured prediction with language models. In: Proceedings of the Findings of the Association for Computational Linguistics: EMNLP 2022. 2022, 993−1005
176
X, Chen L, Li S, Deng C, Tan C, Xu F, Huang L, Si H, Chen N Zhang . LightNER: a lightweight tuning paradigm for low-resource NER via pluggable prompting. In: Proceedings of the 29th International Conference on Computational Linguistics. 2022, 2374−2387
177
Nie B, Shao Y, Wang Y. Know-adapter: towards knowledge-aware parameter-efficient transfer learning for few-shot named entity recognition. In: Proceedings of 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation. 2024, 9777−9786
178
J, Zhang X, Liu X, Lai Y, Gao S, Wang Y, Hu Y Lin . 2INER: instructive and in-context learning on few-shot named entity recognition. In: Proceedings of the Findings of the Association for Computational Linguistics: EMNLP 2023. 2023, 3940−3951
179
M, Monajatipoor J, Yang J, Stremmel M, Emami F, Mohaghegh M, Rouhsedaghat K W Chang . LLMs in biomedicine: a study on clinical named entity recognition. 2024, arXiv preprint arXiv: 2404.07376
180
A, Dunn J, Dagdelen N, Walker S, Lee A S, Rosen G, Ceder K, Persson A Jain . Structured information extraction from complex scientific text with fine-tuned large language models. 2022, arXiv preprint arXiv: 2212.05238
181
J, Cheung Y, Zhuang Y, Li P, Shetty W, Zhao S, Grampurohit R, Ramprasad C Zhang . POLYIE: a dataset of information extraction from polymer material scientific literature. In: Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 2024
182
J, Dagdelen A, Dunn S, Lee N, Walker A S, Rosen G, Ceder K A, Persson A Jain . Structured information extraction from scientific text with large language models. Nature Communications, 2024, 15( 1): 1418
183
M D, Ma A, Taylor W, Wang N Peng . DICE: data-efficient clinical event extraction with generative models. In: Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics. 2023, 15898−15917
184
Y, Hu I, Ameer X, Zuo X, Peng Y, Zhou Z, Li Y, Li J, Li X, Jiang H Xu . Zero-shot clinical entity recognition using ChatGPT. 2023, arXiv preprint arXiv: 2303.16416
185
M, Agrawal S, Hegselmann H, Lang Y, Kim D Sontag . Large language models are few-shot clinical information extractors. In: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing. 2022, 1998−2022
186
Labrak Y, Rouvier M, Dufour R. A zero-shot and few-shot study of instruction-finetuned large language models applied to clinical and biomedical tasks. In: Proceedings of 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation. 2024
187
B J, Gutiérrez N, McNeal C, Washington Y, Chen L, Li H, Sun Y Su . Thinking about GPT-3 in-context learning for biomedical IE? Think again. In: Proceedings of the Findings of the Association for Computational Linguistics: EMNLP 2022. 2022, 4497−4512
188
J, Biana W, Zhai X, Huang J, Zheng S Zhu . VANER: leveraging large language model for versatile and adaptive biomedical named entity recognition. 2024, arXiv preprint arXiv: 2404.17835
189
C E, González-Gallardo E, Boros N, Girdhar A, Hamdi J G, Moreno A Doucet . yes but.. can ChatGPT identify entities in historical documents? In: Proceedings of 2023 ACM/IEEE Joint Conference on Digital Libraries. 2023, 184−189
190
T, Xie Q, Li J, Zhang Y, Zhang Z, Liu H Wang . Empirical study of zero-shot NER with ChatGPT. In: Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing. 2023, 7935−7956
191
J, Gao H, Zhao C, Yu R Xu . Exploring the feasibility of ChatGPT for event extraction. 2023, arXiv preprint arXiv: 2303.03836
192
H, Gui J, Zhang H, Ye N Zhang . InstructIE: a Chinese instruction-based information extraction dataset. 2023, arXiv preprint arXiv: 2305.11527
193
R, Han T, Peng C, Yang B, Wang L, Liu X Wan . Is information extraction solved by ChatGPT? an analysis of performance, evaluation criteria, robustness and errors. 2023, arXiv preprint arXiv: 2305.14450
194
U, Katz M, Vetzler A, Cohen Y Goldberg . NERetrieve: dataset for next generation named entity recognition and retrieval. In: Proceedings of the Findings of the Association for Computational Linguistics: EMNLP 2023. 2023, 3340−3354
195
B, Li G, Fang Y, Yang Q, Wang W, Ye W, Zhao S Zhang . Evaluating ChatGPT’s information extraction capabilities: an assessment of performance, explainability, calibration, and faithfulness. 2023, arXiv preprint arXiv: 2304.11633
196
H, Fei M, Zhang M, Zhang T S Chua . XNLP: an interactive demonstration system for universal structured NLP. In: Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics. 2024
197
C, Liu F, Zhao Y, Kang J, Zhang X, Zhou C, Sun K, Kuang F Wu . RexUIE: a recursive method with explicit schema instructor for universal information extraction. In: Proceedings of the Findings of the Association for Computational Linguistics: EMNLP 2023. 2023, 15342−15359
198
T, Zhu J, Ren Z, Yu M, Wu G, Zhang X, Qu W, Chen Z, Wang B, Huai M Zhang . Mirror: a universal framework for various information extraction tasks. In: Proceedings of 2023 Conference on Empirical Methods in Natural Language Processing. 2023, 8861−8876
199
C, Bhagavatula Bras R, Le C, Malaviya K, Sakaguchi A, Holtzman H, Rashkin D, Downey S W T, Yih Y Choi . Abductive commonsense reasoning. In: Proceedings of the 8th International Conference on Learning Representations. 2020
200
OpenAI. Introduce ChatGPT. See openai.com/index/chatgpt/ website, 2023
201
C, Whitehouse M, Choudhury A F Aji . LLM-powered data augmentation for enhanced cross-lingual performance. In: Proceedings of 2023 Conference on Empirical Methods in Natural Language Processing. 2023
202
L, Wu Z, Zheng Z, Qiu H, Wang H, Gu T, Shen C, Qin C, Zhu H, Zhu Q, Liu H, Xiong E Chen . A survey on large language models for recommendation. World Wide Web, 2024, 27( 5): 60
203
Y, Chen Q, Wang S, Wu Y, Gao T, Xu Y Hu . TOMGPT: reliable text-only training approach for cost-effective multi-modal large language model. ACM Transactions on Knowledge Discovery from Data, 2024, 18( 7): 171
204
P, Luo T, Xu C, Liu S, Zhang L, Xu M, Li E Chen . Bridging gaps in content and knowledge for multimodal entity linking. In: Proceedings of the ACM Multimedia 2024. 2024
205
H, Yang X, Zhao S, Huang Q, Li G Xu . LATEX-GCL: large language models (LLMs)-based data augmentation for text-attributed graph contrastive learning. 2024, arXiv preprint arXiv: 2409.01145
206
Y, Gao Y, Xiong X, Gao K, Jia J, Pan Y, Bi Y, Dai J, Sun M, Wang H Wang . Retrieval-augmented generation for large language models: a survey. 2023, arXiv preprint arXiv: 2312.10997
207
L, Gao S, Biderman S, Black L, Golding T, Hoppe C, Foster J, Phang H, He A, Thite N, Nabeshima S, Presser C Leahy . The pile: an 800GB dataset of diverse text for language modeling. 2020, arXiv preprint arXiv: 2101.00027
208
Marvin G, Hellen N, Jjingo D, Nakatumba-Nabende J. Prompt engineering in large language models. In: Jacob I J, Piramuthu S, Falkowski-Gilski P, eds. Data Intelligence and Cognitive Informatics. Singapore: Springer, 2024, 387−402
209
Zhao H, Zheng S, Wu L, Yu B, Wang J. LANE: logic alignment of non-tuning large language models and online recommendation systems for explainable reason generation. 2024, arXiv preprint arXiv: 2407.02833
210
Zheng Z, Qiu Z, Hu X, Wu L, Zhu H, Xiong H. Generative job recommendations with large language model. 2023, arXiv preprint arXiv: 2307.02157
211
Wu L, Qiu Z, Zheng Z, Zhu H, Chen E. Exploring large language model for graph data understanding in online job recommendations. In: Proceedings of the 38th AAAI Conference on Artificial Intelligence. 2024, 9178−9186
212
Zheng Z, Chao W, Qiu Z, Zhu H, Xiong H. Harnessing large language models for text-rich sequential recommendation. In: Proceedings of the ACM Web Conference 2024. 2024, 3207−3216
213
Chen B, Zhang Z, Langrené N, Zhu S. Unleashing the potential of prompt engineering in large language models: a comprehensive review. 2023, arXiv preprint arXiv: 2310.14735
214
Zhao Z, Lin F, Zhu X, Zheng Z, Xu T, Shen S, Li X, Yin Z, Chen E. DynLLM: when large language models meet dynamic graph recommendation. 2024, arXiv preprint arXiv: 2405.07580
215
Wang J, Shi E, Yu S, Wu Z, Ma C, Dai H, Yang Q, Kang Y, Wu J, Hu H, Yue C, Zhang H, Liu Y, Pan Y, Liu Z, Sun L, Li X, Ge B, Jiang X, Zhu D, Yuan Y, Shen D, Liu T, Zhang S. Prompt engineering for healthcare: methodologies and applications. 2023, arXiv preprint arXiv: 2304.14670
216
Xu D, Zhang Z, Lin Z, Wu X, Zhu Z, Xu T, Zhao X, Zheng Y, Chen E. Multi-perspective improvement of knowledge graph completion with large language models. In: Proceedings of 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation. 2024, 11956−11968
217
Li X, Zhou J, Chen W, Xu D, Xu T, Chen E. Visualization recommendation with prompt-based reprogramming of large language models. In: Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics. 2024, 13250−13262
218
Liu C, Xie Z, Zhao S, Zhou J, Xu T, Li M, Chen E. Speak from heart: an emotion-guided LLM-based multimodal method for emotional dialogue generation. In: Proceedings of 2024 International Conference on Multimedia Retrieval. 2024, 533−542
219
Peng W, Xu D, Xu T, Zhang J, Chen E. Are GPT embeddings useful for ads and recommendation? In: Proceedings of the 16th International Conference on Knowledge Science, Engineering and Management. 2023, 151−162
220
Peng W, Yi J, Wu F, Wu S, Zhu B B, Lyu L, Jiao B, Xu T, Sun G, Xie X. Are you copying my model? Protecting the copyright of large language models for EaaS via backdoor watermark. In: Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics. 2023, 7653−7668
221
Wei J, Wang X, Schuurmans D, Bosma M, Ichter B, Xia F, Chi E H, Le Q V, Zhou D. Chain-of-thought prompting elicits reasoning in large language models. In: Proceedings of the 36th International Conference on Neural Information Processing Systems. 2022, 1800
222
Chu Z, Chen J, Chen Q, Yu W, He T, Wang H, Peng W, Liu M, Qin B, Liu T. A survey of chain of thought reasoning: advances, frontiers and future. 2023, arXiv preprint arXiv: 2309.15402
223
Kojima T, Gu S S, Reid M, Matsuo Y, Iwasawa Y. Large language models are zero-shot reasoners. In: Proceedings of the 36th International Conference on Neural Information Processing Systems. 2022, 22199−22213
224
Yin S, Fu C, Zhao S, Li K, Sun X, Xu T, Chen E. A survey on multimodal large language models. 2023, arXiv preprint arXiv: 2306.13549
225
Willard B T, Louf R. Efficient guided generation for large language models. 2023, arXiv preprint arXiv: 2307.09702
226
Beurer-Kellner L, Müller M N, Fischer M, Vechev M. Prompt sketching for large language models. 2023, arXiv preprint arXiv: 2311.04954
227
Zheng L, Yin L, Xie Z, Huang J, Sun C, Yu C H, Cao S, Kozyrakis C, Stoica I, Gonzalez J E, Barrett C, Sheng Y. Efficiently programming large language models using SGLang. 2023, arXiv preprint arXiv: 2312.07104
228
Huang J, Li C, Subudhi K, Jose D, Balakrishnan S, Chen W, Peng B, Gao J, Han J. Few-shot named entity recognition: an empirical baseline study. In: Proceedings of 2021 Conference on Empirical Methods in Natural Language Processing. 2021, 10408−10423
229
Liu Z, Wu L, He M, Guan Z, Zhao H, Feng N. Dr.E bridges graphs with large language models through words. 2024, arXiv preprint arXiv: 2406.15504
230
Guan Z, Zhao H, Wu L, He M, Fan J. LangTopo: aligning language descriptions of graphs with tokenized topological modeling. 2024, arXiv preprint arXiv: 2406.13250
231
Zha R, Zhang L, Li S, Zhou J, Xu T, Xiong H, Chen E. Scaling up multivariate time series pre-training with decoupled spatial-temporal representations. In: Proceedings of the 40th IEEE International Conference on Data Engineering. 2024, 667−678
232
Zhao L, Liu Q, Yue L, Chen W, Chen L, Sun R, Song C. COMI: COrrect and mitigate shortcut learning behavior in deep neural networks. In: Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval. 2024, 218−228
233
Lin F, Zhao Z, Zhu X, Zhang D, Shen S, Li X, Xu T, Zhang S, Chen E. When box meets graph neural network in tag-aware recommendation. In: Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 2024, 1770−1780
234
Liu Q, Wu X, Zhao X, Zhu Y, Xu D, Tian F, Zheng Y. When MOE meets LLMs: parameter efficient fine-tuning for multi-task medical applications. In: Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval. 2024, 1104−1114
235
Liu Q, Wu X, Zhao X, Zhu Y, Zhang Z, Tian F, Zheng Y. Large language model distilling medication recommendation model. 2024, arXiv preprint arXiv: 2402.02803
236
Wang Y, Wang Y, Fu Z, Li X, Zhao X, Guo H, Tang R. LLM4MSR: an LLM-enhanced paradigm for multi-scenario recommendation. 2024, arXiv preprint arXiv: 2406.12529
237
Zhao Z, Fan W, Li J, Liu Y, Mei X, Wang Y Q. Recommender systems in the era of large language models (LLMs). IEEE Transactions on Knowledge and Data Engineering, 2024, 36( 11): 6889–6907
238
Qiao S, Ou Y, Zhang N, Chen X, Yao Y, Deng S, Tan C, Huang F, Chen H. Reasoning with language model prompting: a survey. In: Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics. 2023, 5368−5393
239
Sang E F T K, De Meulder F. Introduction to the CoNLL-2003 shared task: language-independent named entity recognition. In: Proceedings of the 7th Conference on Natural Language Learning. 2003, 142−147
240
Roth D, Yih W T. A linear programming formulation for global inference in natural language tasks. In: Proceedings of the 8th Conference on Computational Natural Language Learning. 2004, 1−8
241
Walker C, Strassel S, Medero J, Maeda K. ACE 2005 multilingual training corpus - Linguistic Data Consortium. See catalog.ldc.upenn.edu/LDC2006T06 website, 2005
242
Doddington G R, Mitchell A, Przybocki M A, Ramshaw L A, Strassel S M, Weischedel R M. The automatic content extraction (ACE) program - tasks, data, and evaluation. In: Proceedings of the 4th International Conference on Language Resources and Evaluation. 2004, 837−840
243
Li J, Sun Y, Johnson R J, Sciaky D, Wei C H, Leaman R, Davis A P, Mattingly C J, Wiegers T C, Lu Z. BioCreative V CDR task corpus: a resource for chemical disease relation extraction. Database, 2016, 2016: baw068
244
Derczynski L, Bontcheva K, Roberts I. Broad twitter corpus: a diverse named entity recognition resource. In: Proceedings of the 26th International Conference on Computational Linguistics. 2016, 1169−1179
245
Karimi S, Metke-Jimenez A, Kemp M, Wang C. CADEC: a corpus of adverse drug event annotations. Journal of Biomedical Informatics, 2015, 55: 73–81
246
Wang Z, Shang J, Liu L, Lu L, Liu J, Han J. CrossWeigh: training named entity tagger from imperfect annotations. In: Proceedings of 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing. 2019, 5153−5162
247
Liu Z, Xu Y, Yu T, Dai W, Ji Z, Cahyawijaya S, Madotto A, Fung P. CrossNER: evaluating cross-domain named entity recognition. In: Proceedings of the 35th AAAI Conference on Artificial Intelligence. 2021, 13452−13460
248
Kumar A, Starly B. “FabNER”: information extraction from manufacturing process science domain literature using named entity recognition. Journal of Intelligent Manufacturing, 2022, 33( 8): 2393–2407
249
Ding N, Xu G, Chen Y, Wang X, Han X, Xie P, Zheng H, Liu Z. Few-NERD: a few-shot named entity recognition dataset. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing. 2021, 3198−3213
250
Guan R, Man K L, Chen F, Yao S, Hu R, Zhu X, Smith J, Lim E G, Yue Y. FindVehicle and VehicleFinder: a NER dataset for natural language-based vehicle retrieval and a keyword-based cross-modal vehicle retrieval system. Multimedia Tools and Applications, 2024, 83: 24841–24874
251
Kim J D, Ohta T, Tateisi Y, Tsujii J. GENIA corpus - a semantically annotated corpus for bio-textmining. Bioinformatics, 2003, 19( S1): i180–i182
252
Chen P, Xu H, Zhang C, Huang R. Crossroads, buildings and neighborhoods: a dataset for fine-grained location recognition. In: Proceedings of 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 2022, 3329−3339
253
Liu J, Pasupat P, Cyphers S, Glass J. Asgard: a portable architecture for multilingual dialogue systems. In: Proceedings of 2013 IEEE International Conference on Acoustics, Speech and Signal Processing. 2013, 8386−8390
254
Tedeschi S, Navigli R. MultiNERD: a multilingual, multi-genre and fine-grained dataset for named entity recognition (and disambiguation). In: Proceedings of the Findings of the Association for Computational Linguistics: NAACL 2022. 2022, 801−812
255
Doğan R I, Leaman R, Lu Z. NCBI disease corpus: a resource for disease name recognition and concept normalization. Journal of Biomedical Informatics, 2014, 47: 1–10
256
Pradhan S, Moschitti A, Xue N, Ng H T, Björkelund A, Uryupina O, Zhang Y, Zhong Z. Towards robust linguistic analysis using OntoNotes. In: Proceedings of the 17th Conference on Computational Natural Language Learning. 2013, 143−152
257
Pradhan S, Elhadad N, South B R, Martínez D, Christensen L, Vogel A, Suominen H, Chapman W W, Savova G. Task 1: ShARe/CLEF eHealth evaluation lab 2013. In: Proceedings of the Working Notes for CLEF 2013 Conference. 2013
258
Mowery D L, Velupillai S, South B R, Christensen L, Martínez D, Kelly L, Goeuriot L, Elhadad N, Pradhan S, Savova G, Chapman W W. Task 2: ShARe/CLEF eHealth evaluation lab 2014. In: Proceedings of the Working Notes for CLEF 2014 Conference. 2014, 31−42
259
Lu D, Neves L, Carvalho V, Zhang N, Ji H. Visual attention model for name tagging in multimodal social media. In: Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. 2018, 1990−1999
260
Rijhwani S, Preotiuc-Pietro D. Temporally-informed analysis of named entity recognition. In: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. 2020, 7605−7617
261
Jiang H, Hua Y, Beeferman D, Roy D. Annotating the tweebank corpus on named entity recognition and building NLP models for social media analysis. In: Proceedings of the 13th Language Resources and Evaluation Conference. 2022, 7199−7208
262
Zhang Q, Fu J, Liu X, Huang X. Adaptive co-attention network for named entity recognition in tweets. In: Proceedings of the 32nd AAAI Conference on Artificial Intelligence. 2018, 5674−5681
263
Ushio A, Barbieri F, Silva V, Neves L, Camacho-Collados J. Named entity recognition in Twitter: a dataset and analysis on short-term temporal shifts. In: Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing. 2022, 309−319
264
Wang X, Tian J, Gui M, Li Z, Wang R, Yan M, Chen L, Xiao Y. WikiDiverse: a multimodal entity linking dataset with diversified contextual topics and entity types. In: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics. 2022, 4785−4797
265
Derczynski L, Nichols E, van Erp M, Limsopatham N. Results of the WNUT2017 shared task on novel and emerging entity recognition. In: Proceedings of the 3rd Workshop on Noisy User-Generated Text. 2017, 140−147
266
Gurulingappa H, Rajput A M, Roberts A, Fluck J, Hofmann-Apitius M, Toldo L. Development of a benchmark corpus to support the automatic extraction of drug-related adverse effects from medical case reports. Journal of Biomedical Informatics, 2012, 45( 5): 885–892
267
Yao Y, Ye D, Li P, Han X, Lin Y, Liu Z, Liu Z, Huang L, Zhou J, Sun M. DocRED: a large-scale document-level relation extraction dataset. In: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. 2019, 764−777
268
Zheng C, Wu Z, Feng J, Fu Z, Cai Y. MNRE: a challenge multimodal dataset for neural relation extraction with visual evidence in social media posts. In: Proceedings of 2021 IEEE International Conference on Multimedia and Expo. 2021, 1−6
269
Riedel S, Yao L, McCallum A. Modeling relations and their mentions without labeled text. In: Proceedings of the European Conference on Machine Learning and Knowledge Discovery in Databases. 2010, 148−163
270
Stoica G, Platanios E A, Poczos B. Re-TACRED: addressing shortcomings of the TACRED dataset. In: Proceedings of the 35th AAAI Conference on Artificial Intelligence. 2021, 13843−13850
271
Luan Y, He L, Ostendorf M, Hajishirzi H. Multi-task identification of entities, relations, and coreference for scientific knowledge graph construction. In: Proceedings of 2018 Conference on Empirical Methods in Natural Language Processing. 2018, 3219−3232
272
Hendrickx I, Kim S N, Kozareva Z, Nakov P, Ó Séaghdha D, Padó S, Pennacchiotti M, Romano L, Szpakowicz S. SemEval-2010 task 8: multi-way classification of semantic relations between pairs of nominals. In: Proceedings of the 5th International Workshop on Semantic Evaluation. 2010, 33−38
273
Zhang Y, Zhong V, Chen D, Angeli G, Manning C D. Position-aware attention and supervised data improve slot filling. In: Proceedings of 2017 Conference on Empirical Methods in Natural Language Processing. 2017, 35−45
274
Alt C, Gabryszak A, Hennig L. TACRED revisited: a thorough evaluation of the TACRED relation extraction task. In: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. 2020, 1558−1569
275
Satyapanich T, Ferraro F, Finin T. CASIE: extracting cybersecurity event information from text. In: Proceedings of the 34th AAAI Conference on Artificial Intelligence. 2020, 8749−8757
276
Kim J D, Wang Y, Takagi T, Yonezawa A. Overview of GENIA event task in BioNLP shared task 2011. In: Proceedings of BioNLP Shared Task 2011 Workshop. 2011, 7−15
277
Kim J D, Wang Y, Yamamoto Y. The GENIA event extraction shared task, 2013 edition - overview. In: Proceedings of BioNLP Shared Task 2013 Workshop. 2013, 8−15
278
Sun Z, Li J, Pergola G, Wallace B, John B, Greene N, Kim J, He Y. PHEE: a dataset for pharmacovigilance event extraction from text. In: Proceedings of 2022 Conference on Empirical Methods in Natural Language Processing. 2022, 5571−5587
279
Ebner S, Xia P, Culkin R, Rawlins K, Van Durme B. Multi-sentence argument linking. In: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. 2020, 8057−8077
280
Zamai A, Zugarini A, Rigutini L, Ernandes M, Maggini M. Show less, instruct more: enriching prompts with definitions and guidelines for zero-shot NER. 2024, arXiv preprint arXiv: 2407.01272
281
Lewis M, Liu Y, Goyal N, Ghazvininejad M, Mohamed A, Levy O, Stoyanov V, Zettlemoyer L. BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. 2020, 7871−7880
282
Raffel C, Shazeer N, Roberts A, Lee K, Narang S, Matena M, Zhou Y, Li W, Liu P J. Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research, 2020, 21( 1): 140
283
Xue L, Constant N, Roberts A, Kale M, Al-Rfou R, Siddhant A, Barua A, Raffel C. mT5: a massively multilingual pre-trained text-to-text transformer. In: Proceedings of 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 2021, 483−498
284
Chung H W, Hou L, Longpre S, Zoph B, Tay Y, Fedus W, Li Y, Wang X, Dehghani M, Brahma S, Webson A, Gu S S, Dai Z, Suzgun M, Chen X, Chowdhery A, Castro-Ros A, Pellat M, Robinson K, Valter D, Narang S, Mishra G, Yu A, Zhao V, Huang Y, Dai A, Yu H, Petrov S, Chi E H, Dean J, Devlin J, Roberts A, Zhou D, Le Q V, Wei J. Scaling instruction-finetuned language models. 2022, arXiv preprint arXiv: 2210.11416
285
Du Z, Qian Y, Liu X, Ding M, Qiu J, Yang Z, Tang J. GLM: general language model pretraining with autoregressive blank infilling. In: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics. 2022, 320−335
286
Touvron H, Lavril T, Izacard G, Martinet X, Lachaux M A, Lacroix T, Rozière B, Goyal N, Hambro E, Azhar F, Rodriguez A, Joulin A, Grave E, Lample G. LLaMA: open and efficient foundation language models. 2023, arXiv preprint arXiv: 2302.13971
287
Taori R, Gulrajani I, Zhang T, Dubois Y, Li X. Stanford Alpaca: an instruction-following LLaMA model. See github.com/tatsu-lab/stanford_alpaca website, 2023
288
Chiang W L, Li Z, Lin Z, Sheng Y, Wu Z, Zhang H, Zheng L, Zhuang S, Zhuang Y, Gonzalez J E, Stoica I, Xing E P. Vicuna: an open-source chatbot impressing GPT-4 with 90%* ChatGPT quality. See vicuna.lmsys.org website, 2023
289
Touvron H, Martin L, Stone K, Albert P, Almahairi A, et al. Llama 2: open foundation and fine-tuned chat models. 2023, arXiv preprint arXiv: 2307.09288
290
Rozière B, Gehring J, Gloeckle F, Sootla S, Gat I, Tan X E, Adi Y, Liu J, Sauvestre R, Remez T, Rapin J, Kozhevnikov A, Evtimov I, Bitton J, Bhatt M, Canton Ferrer C, Grattafiori A, Xiong W, Défossez A, Copet J, Azhar F, Touvron H, Martin L, Usunier N, Scialom T, Synnaeve G. Code Llama: open foundation models for code. 2023, arXiv preprint arXiv: 2308.12950
291
Radford A, Wu J, Child R, Luan D, Amodei D, Sutskever I. Language models are unsupervised multitask learners. OpenAI Blog, 2019, 1( 8): 9
292
Brown T B, Mann B, Ryder N, Subbiah M, Kaplan J D, Dhariwal P, Neelakantan A, Shyam P, Sastry G, Askell A, Agarwal S, Herbert-Voss A, Krueger G, Henighan T, Child R, Ramesh A, Ziegler D M, Wu J, Winter C, Hesse C, Chen M, Sigler E, Litwin M, Gray S, Chess B, Clark J, Berner C, McCandlish S, Radford A, Sutskever I, Amodei D. Language models are few-shot learners. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. 2020, 159
293
Wang B. Mesh-Transformer-JAX: model-parallel implementation of Transformer language model with JAX. See github.com/kingoflolz/mesh-transformer-jax website, 2021
294
Ouyang L, Wu J, Jiang X, Almeida D, Wainwright C L, Mishkin P, Zhang C, Agarwal S, Slama K, Ray A, Schulman J, Hilton J, Kelton F, Miller L, Simens M, Askell A, Welinder P, Christiano P, Leike J, Lowe R. Training language models to follow instructions with human feedback. In: Proceedings of the 36th International Conference on Neural Information Processing Systems. 2022, 27730−27744