NW-NLP 2018: Adverbial Clausal Modifiers in the LinGO Grammar Matrix



Hi, everyone. I'm Elsa, a researcher here at MSR AI, and I'm going to be chairing the session this afternoon. We're going to have three awesome talks, and we're going to start with Kristen, who's going to be talking about adverbial clausal modifiers in the LinGO Grammar Matrix. Take it away.

Hi. Today I'll be presenting an extension to the LinGO Grammar Matrix that allows for the automatic production of precision grammars that accommodate adverbial clausal modifiers. This is a little bit less of an NLP talk, so I'm going to start by giving you background on precision grammars, and the Grammar Matrix in particular. Then I'll discuss the typology of clausal modifiers in the world's languages, briefly introduce our syntactic analysis, and then go through our development of the library and our evaluation and error analysis.

Precision grammars are bidirectional grammars that parse and generate sentences, but their strength is that they produce very precise syntactic and semantic representations. In our case, the syntactic representations are in the HPSG formalism, and the semantics use Minimal Recursion Semantics. The most important thing about these grammars is that they prioritize precision, meaning we care about what proportion of our parses are the right parse, rather than being as concerned with our overall coverage. Uses for these grammars include building grammars for languages where we don't necessarily have enough data for traditional NLP methods, but where we do have linguistic information that we can utilize. Linguists use these grammars for hypothesis testing, checking their analyses over a corpus. They're also useful for finding out what's actually in your corpus: when you run your corpus through one of these grammars, you might find a phenomenon that you didn't realize was there, or you might find inconsistencies in your data or in your annotation. However, these grammars are very expensive and time-consuming to create.

So the LinGO Grammar Matrix is a starter kit for creating HPSG grammars. Basically, it elicits information from the user about the language and then produces grammars from that. As a quick example, this is the word order page on the customization screen. It asks the user: what is the basic word order of the language, is it subject-verb-object, subject-object-verb, and so forth? Do determiners come before or after the noun? Do auxiliaries come before or after their verbal complements? Based on the user's input on this page, and analyses developed from the typological literature, a precision grammar is output that can accommodate those phenomena.
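To make the customization step concrete, here is a minimal sketch in Python, not the Matrix's actual implementation (which emits grammar type definitions rather than rule names): questionnaire answers arrive as a flat dictionary of choices, and each choice selects rules to include in the output grammar. All option names and rule names below are hypothetical.

```python
# A minimal sketch, not the actual Grammar Matrix code. All option
# names and rule names here are hypothetical.

WORD_ORDER_RULES = {
    "sov": ["comp-head-phrase", "subj-head-phrase"],
    "svo": ["head-comp-phrase", "subj-head-phrase"],
    "vso": ["head-comp-phrase", "head-subj-phrase"],
}

def customize(choices):
    """Map questionnaire answers to the rules the output grammar needs."""
    rules = list(WORD_ORDER_RULES[choices["word-order"]])
    # A determiner before the noun means the specifier precedes its head.
    if choices.get("determiner-order") == "det-before-noun":
        rules.append("spec-head-phrase")
    else:
        rules.append("head-spec-phrase")
    return rules

print(customize({"word-order": "sov", "determiner-order": "det-before-noun"}))
# ['comp-head-phrase', 'subj-head-phrase', 'spec-head-phrase']
```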

Right now the Grammar Matrix includes a wide variety of phenomena, including word order, morphology, and tense and aspect, but previously it did not accommodate subordinate clauses of any kind. So today we're presenting a library for adverbial clausal modifiers. These are subordinate clauses that present information like time or purpose: your basic 'while' and 'because' type clauses. In the world's languages these manifest in a number of ways, but they follow three basic patterns. Typically, clausal modifiers are marked by a free subordinating morpheme, and I'll just use the word 'subordinator' moving forward; this is a word like 'when', or 'to' in the Japanese example at the top. Another pattern is subordinator pairs, and we see these very robustly in Mandarin, such as the yinwei...suoyi pair, where you have the 'because' subordinator in the subordinate clause, but then in the matrix clause you have to have an additional adverb. These are also really common in if-then clauses cross-linguistically, if it's easier to think of an English example. And finally, sometimes subordinate clauses are marked just with verbal morphology, so the purposive suffix in this example from Luiseño marks the subordinate clause as a purposive clause.

In addition to these basic patterns, we also see a very wide range of variation within these clauses, and any given language will probably use a handful of different strategies that draw on these characteristics. These include the position of the clausal modifier, whether it comes before or after the main clause; whether or not there's a subordinator pair; whether there's additional verbal morphology, so you might have a subordinator and also a non-finite verb in the subordinate clause; sometimes the clause is nominalized; and sometimes the subject is shared, so it's unexpressed in the subordinate clause but expressed in the main clause. That's an example like 'I went to the store to buy bread', where 'I' is the subject of both clauses but unexpressed in the purposive clause.
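It can help to see those dimensions of variation written down as data. The schema below is my own naming, not the questionnaire's actual format: one record per clausal-modifier strategy that a language uses.

```python
# A sketch of the typological dimensions as data; field names are mine.

from dataclasses import dataclass
from typing import Optional

@dataclass
class ClausalModifierStrategy:
    marking: str                  # "free-subordinator", "subordinator-pair",
                                  # or "verbal-morphology"
    subordinator: Optional[str]   # e.g. "when"; None if purely morphological
    matrix_adverb: Optional[str]  # second member of a pair (Mandarin suoyi)
    position: str                 # "before", "after", or "either"
    nonfinite_verb: bool          # subordinator plus a non-finite verb
    nominalized: bool             # the subordinate clause is nominalized
    shared_subject: bool          # subject unexpressed in the subordinate clause

# English purposive: "I went to the store to buy bread."
purposive = ClausalModifierStrategy(
    marking="free-subordinator", subordinator="to", matrix_adverb=None,
    position="either", nonfinite_verb=True, nominalized=False,
    shared_subject=True)
```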

In order to account for this space and analyze it within the Grammar Matrix, we introduced two lexical types and two unary rules; I'll abstract away from the nitty-gritty HPSG on this slide. Basically, we've created four supertypes that account for the basics of each strategy, whether it's marked by a subordinator or by morphological subordination, and onto these supertypes we can add new features in the subtypes. So, to account for all of the characteristics on this slide, we add feature definitions that we can put into the subtypes for each subordinate-clause strategy. For example, we use the POSTHEAD feature, which is just a boolean feature, to constrain whether the clausal modifier goes before or after the main clause, and if it's free to go in either place, we leave that underspecified.

A quick example of how we map those features onto the customization page for the user: we use an iterator that allows the user to define as many clausal-modifier strategies as they want. They can have a purposive strategy that just uses the dropped subject and a non-finite verb, as in 'to buy bread', and a different strategy for a 'because' clause, which is marked by a subordinator, and they're free to fill in as many strategies as they need.

In our development we used two sets of development data. The first stage is pseudo-languages: languages with a minimal lexicon that are designed to capture the typological space. We just use a simple language where the nouns are called noun1 and noun2 and the verbs are called verb1 and verb2, and then we capture the typological space as best we can by pairing each subordinator type with each feature. Going back to the previous slide, we have all of these feature values that are possible on the subordinator types, and we'll test, say, the adposition subordinator lexical type with one of the POSTHEAD values. If we were to test every combination, we'd have one thousand and eight test languages, and that would be really cumbersome, because these aren't just test instances, they're full languages. At the end of this stage we expect full coverage: we should be able to capture the typological space by accounting for every possible combination of features; however, we don't test each combination of feature values, just every feature type.
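The talk reports that the full space works out to 1008 test languages; the toy arithmetic below, with made-up feature names and value counts, shows the shape of that blow-up and why pairing each strategy type with one feature value at a time keeps the suite linear instead.

```python
# Illustrative arithmetic only: features and counts are invented.

from math import prod

strategy_types = ["adposition-subordinator", "adverb-subordinator",
                  "subordinator-pair", "verbal-morphology"]
feature_values = {
    "posthead": ["before", "after", "either"],
    "nominalized": [True, False],
    "shared-subject": [True, False],
    "nonfinite-verb": [True, False],
}

# Every strategy type crossed with every combination of feature values:
full = len(strategy_types) * prod(len(v) for v in feature_values.values())
# Pairing each strategy type with one feature value at a time instead:
paired = len(strategy_types) * sum(len(v) for v in feature_values.values())
print(full, paired)  # 96 36
```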
After that, our second development set is illustrative languages, and at this point we actually get involved with natural language. It's a bit messier, because we're testing interactions that actually occur in real languages, such as subject dropping and various word-order patterns, and these interacting phenomena can cause complications that weren't tested with the pseudo-languages. We used four real languages that were selected specifically to illustrate particular phenomena. We used Mandarin to illustrate the subordinator pair that I discussed previously; another of our languages requires that subordinate clauses be nominalized; German is a V2, verb-second, language that requires verb-final word order in subordinate clauses, so we tested that; and Wambaya marks its subordinate clauses morphologically, so we tested that as well.

At this stage we discovered one phenomenon that's not captured in our library, and probably shouldn't be: subordinate clauses can actually have a different case frame on the objects in those clauses, so a nominative-accusative language will actually have dative case in the subordinate clause. We decided that this should be treated separately, as a valence-changing operation in another library that we've set aside for future work, so we don't expect full coverage here, as those examples aren't expected to parse.

Finally, we move on to evaluation. At this stage we test the typological diversity and reliability of our system by testing on five held-out languages; this is not just held-out data, these are languages that we did not consider at all during development. The way we construct our test suites and grammars is by reading descriptive grammars written by field linguists, which have a lot of very detailed data and prose descriptions, and we develop our test suites from those. The languages were selected in a random manner, but were required to be genetically and geographically diverse, so if I pulled two Indo-European languages off the shelf, I would put them back; we represent five distinct language families here. We achieved 88% coverage and 2% overgeneration, and that 2% overgeneration comes from just one example parsing, which really shows our fidelity to precision.
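For reference, here is a minimal sketch of the two numbers being reported, assuming a test suite of items labeled grammatical or ungrammatical and a parser that says whether the grammar admits each sentence; the function names are mine.

```python
# Coverage: share of grammatical items that parse.
# Overgeneration: share of ungrammatical items that (wrongly) parse.

def evaluate(parses, testsuite):
    """parses: callable sentence -> bool; testsuite: (sentence, grammatical)."""
    grammatical = [s for s, ok in testsuite if ok]
    ungrammatical = [s for s, ok in testsuite if not ok]
    coverage = sum(map(parses, grammatical)) / len(grammatical)
    overgeneration = sum(map(parses, ungrammatical)) / len(ungrammatical)
    return coverage, overgeneration
```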

As for our error analysis, this actually tells us quite a bit about the library and about the typology of clausal modifiers. I've broken it into two categories, in-scope phenomena and out-of-scope phenomena. We had one sentence fail to parse in Mosetén due to a constraint in the supertype that shouldn't have been there and should have been added in the subtype, so that's a bug, and it's fixed. Next, we found a bug in the nominalized clauses library: nominalized clauses had a special head-subject rule to allow for that attachment, but it did not account for subject dropping, meaning a nominalized clause without an overt subject couldn't parse. It turns out that that's possible in Basque, so we created a rule and inserted it into the grammar to make sure that sentence would parse with that rule in place.

Next we get to out-of-scope phenomena, and this is really the interesting part of error analysis, because we see what didn't come up in the typological literature that turns out to be out there in the world's languages. Our first error came from the same case-change issue that we saw in Wambaya, and I have an example for you here: in this language, third-person subjects are marked with accusative case instead of nominative case in the subordinate clause. This wasn't accounted for, so one grammatical sentence failed to parse and one ungrammatical sentence parsed, and that's what led to our two percent overgeneration.

The next thing we found, in Ma'di, is that subordinators can actually attach to a verb. In the typological literature we saw subordinators attaching either to the sentence, on the outside of the clause, or to the verb phrase, intervening between the subject and the verb, but we never saw one intervening between the verb and its object. In fact, in Ma'di, in the sentence 'if he brings beer, he should put it on the shelf', 'if' can intervene between 'bring' and 'beer'. This is easy enough to add now that we know it's possible in the world's languages, and that will be part of our future work. And finally, we found in Lavukaleve that the same pair phenomenon that you see in Mandarin, where you have a subordinator in the clausal modifier and an adverb in the main clause, can occur where the subordinate clause is marked morphologically and you still need an adverb in the main clause. There was only a brief description of this in the prose, and one example, so what we would like to do is some further investigation in the typological literature, as well as in that language, before adding this functionality to the library, just to make sure.

So what we've done today is introduce a new library that accommodates one type of subordinate clause in the Grammar Matrix, so that these precision grammars can be produced automatically. Are there any questions?

The question is what the grammars look like. We're using the head-driven phrase structure grammar formalism, which is a lexically driven syntactic framework, so we have a combination of lexical entries and phrase structure rules.

How much of English prose can we parse now, what is still missing, and how far along are other languages? That's a great question. The Grammar Matrix actually grew out of the English Resource Grammar, which covers 96% of well-edited English text, or maybe it's 94, so that's a very large grammar. We haven't actually tested what the Grammar Matrix outputs on English, and we're still missing a number of phenomena. A library for clausal complements has just been added, but there's more to be done: we don't have relative clauses, and we don't have wh-questions, so we still wouldn't be quite there. But that work has been done at the individual-language level.

Other questions? In terms of phenomena that are not yet in the Grammar Matrix, we're still missing quite a bit. What I'd recommend is to go to the Grammar Matrix page and see what's there; then you should be able to figure out what's not. But there are lots of new phenomena to do; relative clauses, I think, are a next direction, and adverbs themselves, not adverbial clauses but just adverbs, haven't been implemented yet.

Next, Michael is going to be talking about 'SimpleQuestions Nearly Solved: A New Upper Bound and Baseline Approach'.
It's joint work with Luke Zettlemoyer. Take it away.

Hi there, my name is Michael, and I'm presenting 'SimpleQuestions Nearly Solved'. This was undergraduate research that I did at the Paul G. Allen School. To give you a brief overview of my talk: first, I'm going to introduce the widely used SimpleQuestions benchmark. Then we'll go over the contributions this paper makes on that benchmark: first, that we found there's an upper bound on this benchmark, and then a simple baseline method that actually gets state-of-the-art results. We'll conclude that, through our analysis, our upper bound is loose, and that the SimpleQuestions task is nearly solved.

Okay. So first, some background: what is a simple question? For example, 'Who wrote Gulliver's Travels?' is a simple question. It's a simple question because it has one relation and one subject: 'simple question' stands for a single-relation, single-subject factoid question. Why are these questions interesting to look at? Well, they appear often online. Specifically, Microsoft sees them a lot in their query logs, we see them popping up in WikiAnswers, and they're commonly asked of voice assistants. Yet although they're very commonly used, on the most popular and largest benchmark, by an order of magnitude, the state-of-the-art results are only 77%.

A little more background. The SimpleQuestions benchmark is the largest simple-questions dataset, with 108 thousand examples. Each example has a question that is labeled with a subject, relation, and object, and there is a backing Freebase knowledge graph, which is just a store of facts. To bring up an example: here is the Freebase relation for 'who wrote' in this context, and for Gulliver's Travels the Freebase object has an MID and an alias, 'Gulliver's Travels'.

So the first contribution is introducing an upper bound. What's really driving this work is that there are fundamental contradictions in the SimpleQuestions dataset. For example, 'Who wrote Gulliver's Travels?' refers to the TV miniseries Gulliver's Travels, while 'Name a character from Gulliver's Travels' refers to the Gulliver's Travels book. This shows that the linguistic signal in these questions provides equal evidence for the subject being either the miniseries or the book; in both cases, it's not clear.

Okay, so how do we quantify this and get to an upper bound? As a reminder, this is the data we're dealing with. The first thing we do is look at the aliases and come up with a list of all the Gulliver's Travels entities in Freebase. Similarly, we take 'who wrote' from the question and create an abstract relation: in SimpleQuestions, 'who wrote' co-occurs with 'book written by', 'film story by', and other relations. We can take the cross product of these, and those are all the possible interpretations of the question we looked at before. Some of them don't make sense: for Gulliver's Travels the book, you wouldn't have a film relation. So we filter on which ones we actually find in Freebase, and from that we come up with a list of multiple correct interpretations of 'Who wrote Gulliver's Travels?', for each of which we have equal linguistic evidence of being correct. Yet we only have one ground truth per example, which here is number three.

So, our results: we find that on this widely used benchmark, 33.9% of the questions have multiple correct interpretations and are therefore unanswerable.
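Here is a minimal sketch of that enumeration, assuming the knowledge graph is a list of (subject, relation, object) triples, plus two precomputed tables: alias to candidate entities, and question phrase to the relations it co-occurs with. All names here are mine, not the paper's code.

```python
# Enumerate every (subject, relation) reading of a question the KB supports.

from itertools import product

def interpretations(subject_alias, relation_phrase, alias_table,
                    relation_table, facts):
    subjects = alias_table[subject_alias]        # all entities so named
    relations = relation_table[relation_phrase]  # all co-occurring relations
    # Cross product of candidates, filtered to pairs backed by a fact:
    return [(s, r) for s, r in product(subjects, relations)
            if any(subj == s and rel == r for subj, rel, _ in facts)]
```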
In computing the upper bound, we find that not all subject-relation pairs are equal. For example, in this case we come up with the relation 'film language', but we have a choice between two subjects: this one right here comes up four times in Freebase, while this one comes up once, and that distribution allows us to set a majority upper bound. We pick the subject-relation pair that happens most often in Freebase, and we find that the best we can do following that simple policy is 83.4% on this dataset, which is quite close to the 77% state of the art.
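A sketch of that bound, under my own naming: since every interpretation of a question carries the same linguistic evidence, any deterministic model can at best always answer with the most frequent ground truth within each evidence group.

```python
# Majority upper bound over groups of indistinguishable questions.

from collections import Counter, defaultdict

def majority_upper_bound(examples):
    """examples: (evidence_key, gold) pairs; evidence_key collapses
    questions whose linguistic signal is identical."""
    groups = defaultdict(Counter)
    for key, gold in examples:
        groups[key][gold] += 1
    answerable = sum(counts.most_common(1)[0][1] for counts in groups.values())
    return answerable / len(examples)
```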

Okay, the second contribution is that we're actually able to get state of the art with just standard methods, with fine-tuning. Our baseline model consists of two neural models. The first does top-k subject recognition, modeling a distribution over spans conditioned on the question. The second is a relation classification model, conditioned on the question and the span, modeling the relation. To better understand those, here is an input and output: we want 'film story by' to have the largest probability for this input, and the same up here. Why might we consider top-k? Because the first predicted span might not always be helpful: it might not be in Freebase at all, so we can do a little bit better by filtering on what is in Freebase and what is not.

Some more details: for subject recognition we use a standard CRF tagger with Viterbi inference, which is standard with a CRF tagger, IO tagging, GloVe embeddings, and grid search for hyperparameter optimization. For relation classification we use a BiLSTM with batch normalization and a softmax, AMSGrad, which is a new version of Adam published recently, fastText embeddings, and Hyperband for hyperparameter optimization.

So, the results. These are the past results; a lot of research has been done on this since Bordes et al. introduced the dataset. To give you a little bit of history: the first work done on this primarily tried end-to-end methods, looking at one model that can do both relation and subject classification. The next two, more recent, papers looked at baseline approaches with two models, and the last three papers all built custom architectures, with three models at times, for various reasons. Applying the standard approach, and better understanding the dataset, with fine-tuning we're able to get 78.1%, which is close to our upper bound of 83.4%.

Finally, we want to understand what the difference is here: what work is left to be done. In understanding this, we first considered our first contribution and allowed all interpretations to be correct; that accounts for 13% of our error to start with. Then we found there's a little bit of noise. Finally, for the remaining error, we found that there's actually further ambiguity: our initial analysis was very static, so if there were small changes in the sentence, it didn't catch that ambiguity. Empirical analysis shows that our upper bound is actually loose due to this further ambiguity, and looking at real model errors, we find that most of the work left on this dataset is in low-shot relations, where we have fewer than 10 examples.

So, in concluding: due to the ambiguity we found, and the further ambiguity beyond it, we believe there's not much more than 4% performance left on this dataset, and we believe the SimpleQuestions task is largely solved. The remaining errors are mostly due to a lack of data, which is a little bit interesting. All right, thank you.
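Putting the two models together, inference looks roughly like the following sketch. The tagger and classifier objects and their methods are stand-ins of my own, not the paper's code, and the knowledge graph is again a list of (subject, relation, object) triples.

```python
# Two-stage pipeline at inference time: top-k spans, Freebase filtering,
# relation classification, then fact lookup.

def answer(question, tagger, classifier, alias_table, facts):
    for span in tagger.top_k_spans(question):   # best-first subject spans
        if span not in alias_table:             # skip spans not in Freebase
            continue
        relation = classifier.predict(question, span)
        for subject in alias_table[span]:
            for subj, rel, obj in facts:
                if subj == subject and rel == relation:
                    return obj                  # the object answers the question
    return None
```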

We can take questions. So, if the upper bound is that poor, it sounds like there's a good reason to say maybe we shouldn't just be returning the single best answer. Maybe it should actually be the best two or three, especially if this is going to be used by a user. I understand why you'd want the top one to compare to other systems, but I could see making an argument that, for shared tasks, maybe we want to look at the top three answers.

Yeah, for future datasets it's definitely important to acknowledge the issue that, for the top-one answer, there can be multiple correct interpretations.

Another question: you mentioned that you could have trained this end to end, the relation classifier and the other classifier. Can you elaborate on why you didn't do that, and if you had, would it have solved some of the issues you mentioned? We didn't because, fundamentally, the purpose of our work was to be a baseline and standard. Integrating both tagging and classification in one model requires more than baseline work, so this is just the paper that first sets the baseline on this task, before thinking about more custom architectures.

Do you have any examples that you can show us today, not of custom architectures, but of errors that your model made? It would be nice for us to see what kinds of errors you're making, false positives maybe. Are you talking about these? Okay, I can send them to you.

It's not really a perfect simple-questions dataset, because there are multiple answers to the questions that are listed there. Shouldn't you fix the test set so that it is all simple questions? But it's already all simple questions. It's just that these simple questions tend to contradict themselves with regard to which relation they're looking at. For example, if we're talking about the book Gulliver's Travels, we might ask 'What is the language of Gulliver's Travels?', and if we're talking about the film Gulliver's Travels, we might ask exactly the same question. It's not clear whether you're talking about the book or the film unless you integrate more complexity into the question, and even with an ambiguous answer, it's still a simple question.

Okay, so next we have Yi Luan from UW.

Hi, everybody. I'm working with Mari and Hanna, and I'm going to present our recent work on scientific relation extraction with selectively incorporated concept embeddings. The reason we're doing this work is that there are countless scientific papers published here and there, and tools that can manage that scientific knowledge are in great demand. The goal of this research is to develop information extraction tools, using NLP approaches, that can support scientific literature search. Basically, we want to build a scientific knowledge graph, and we also want to answer specific scientific search queries. For example, if you ask the computer 'Show me the methods for part-of-speech tagging', the computer can give back a list of recommended methods that can solve the problem, such as conditional random fields and HMMs.

There are basically two major steps toward scientific information extraction. The first one is scientific concept extraction, which was first introduced at SemEval 2017.

The task there is to identify scientific term spans and to classify each span into concept categories such as method, task, and material. For example, in this sentence, 'conditional random fields' and 'named entity recognition' would be extracted and classified as a method and a task. The focus of this paper, though, is the second step, scientific relation extraction, which is to classify the relations between pairs of scientific concepts. For example, in this sentence we want to know that the relation between 'conditional random field' and 'named entity recognition' is that one concept is used for the other.

So the task is: given candidate concept pairs, we want to classify them into scientific relation types. We have five asymmetric relations, which are usage, result, model feature, part-whole, and topic, and one symmetric relation, which is comparison. The dataset we use is SemEval 2018 Task 7, which has 500 annotated scientific abstracts from the ACL Anthology. The main challenge of this task is that the annotation is pretty hard and requires domain expertise, so we only have limited training data. The contributions of this research are that we introduce a state-of-the-art neural relation extraction model to the scientific domain, that we propose a novel concept selection layer in the neural structure to mitigate the data scarcity problem, and that we obtain the state-of-the-art result in SemEval 2018 Task 7.

This is the overall structure of our neural relation extraction model, which has mainly four components. First, given the sequence of text, we use the token sequence layer to build a representation for the sequence. That representation is then fed into a concept representation layer and a dependency sequence layer, and those representations are fed into the final relation classification layer, which gives us the final output. The token sequence layer consists of a bidirectional LSTM, which takes word embeddings and some feature embeddings as input. The dependency sequence layer builds on this representation: we parse the sentence with a dependency parser, extract the shortest path between the two concepts, and then decompose the dependency path into two parts.
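Here is a sketch of that decomposition, assuming the dependency parse is given as an edge list over token indices; I use networkx for the path finding, and the splitting scheme is my paraphrase of the slide, not the paper's code.

```python
# Split the shortest dependency path between two concept heads at their
# common ancestor (the path node closest to the root).

import networkx as nx

def split_dependency_path(edges, head1, head2, root):
    graph = nx.Graph(edges)
    path = nx.shortest_path(graph, head1, head2)
    ancestor = min(path, key=lambda n: nx.shortest_path_length(graph, root, n))
    i = path.index(ancestor)
    forward = path[:i + 1]          # concept 1 up to the common ancestor
    backward = path[i:][::-1]       # concept 2 up to the common ancestor
    return forward, backward
```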

We first find the common ancestor of the two concepts, which is the word 'used' in this case. Then, based on the direction of the relation, we model a forward dependency path and a backward dependency path, which represent the dependency paths between each concept and the common ancestor. We feed the head-word representation of each concept, and the common-ancestor representation, into the backward and forward dependency paths, concatenate those tokens as the final representation of the relation, and then output the final result.

This dependency representation by itself can already give us a pretty good result, but something is missing in this structure: a representation of the concepts that can model the whole span of the phrase. The traditional way of doing this, as in prior work (for example, Miwa and Bansal, 2016), is to automatically learn a weight alpha for each individual token in the concept and then sum over all of the tokens within the concept span. The limitation of this approach is that, without introducing any prior knowledge of the concept, the automatically learned representation generalizes poorly at test time.

To solve that problem, we propose a new approach, concept selection. We first construct a concept dictionary from the frequent concepts and learn pretrained embeddings for those concepts. Then, given a new concept, we look it up in the concept dictionary to see whether there is an exact match for the phrase. If there is an exact match, we use the pretrained embedding directly as the concept embedding; if there is no exact match, we decompose the phrase into sub-n-grams. For example, say the concept candidate is 'pronominal anaphora resolution', and there is no exact match in the concept dictionary. We decompose the whole phrase into bigrams and unigrams, look those up in the dictionary, and find some matches, such as 'anaphora resolution' and 'resolution'. We then use these candidates, together with a dummy vector, to construct a set of candidate embeddings. To learn the final representation, we need to selectively combine those embeddings, so we use an attention mechanism to learn weights between each of the embeddings and the representation in the token sequence layer, and then sum over all the candidates to get the final embedding.

To obtain the prior knowledge, we first extract all the concepts from a large set of abstracts using the state-of-the-art scientific term extractor proposed in 2017, then train word2vec embeddings for all the frequent concepts and construct the concept dictionary.
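A sketch of the concept selection layer as described, with numpy standing in for the neural toolkit; the function names, the dummy-vector handling, and the dot-product form of the attention are my assumptions.

```python
# Exact dictionary lookup when possible, otherwise sub-n-gram candidates
# combined by attention against the token-sequence representation.

import numpy as np

def sub_ngrams(tokens):
    """All contiguous sub-n-grams of the phrase, excluding the full phrase."""
    return [" ".join(tokens[i:j]) for i in range(len(tokens))
            for j in range(i + 1, len(tokens) + 1)
            if (i, j) != (0, len(tokens))]

def concept_embedding(phrase, dictionary, context_vec, dummy):
    if phrase in dictionary:              # exact match: use it directly
        return dictionary[phrase]
    candidates = [dictionary[g] for g in sub_ngrams(phrase.split())
                  if g in dictionary]
    candidates.append(dummy)              # fall back to the dummy vector
    C = np.stack(candidates)
    scores = C @ context_vec              # attention scores per candidate
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ C                    # weighted sum of the candidates
```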
Again, the data we use in this task is SemEval 2018 Task 7, which has three subtasks. The first subtask is relation classification: we assume the concept pairs and relation directions are given, and we need to choose among the six defined relations. The second task is relation extraction: given all the concepts, we need to identify whether a relation exists, which is a binary classification task. The third task is a combined relation extraction and classification task: we first extract the relations and then classify them into the six relations together with their directions. The external data we use to pretrain the concept embeddings is the Semantic Scholar corpus, which has 110K abstracts, and the ACL Anthology Reference Corpus.

Here are the results. Our system got the best result in subtask 2, and we also did a good job in the other two tasks; one team did a little bit better than us there, but they used more unlabeled data.

We also performed ablation studies by removing some of the components of the neural network. We first removed the dependency layer, and we can see that the F1 score decreases greatly. Then we removed the concept selection layer, which also results in a worse F1 score. We also ran the baseline method, which is the weighted sum of the tokens, and that didn't perform very well either. The conclusion is that the dependency information is very important in this task: because we don't have a lot of data, external knowledge from the dependency parser matters a great deal. Also, the weighted-sum baseline decreases the performance; we think this is because it introduces some redundant parameters. And we find that replacing the span representation with the selectively pretrained concept layer significantly improves the performance, which gives us our best result of 44.1.

The summary of this research is that we introduce a new neural relation extraction model to the scientific domain, we propose a novel concept selection layer in the neural structure that can leverage unlabeled data, and we obtain the state-of-the-art result in SemEval 2018 Task 7.

So, any questions? I have one question: in the experiments, you mentioned one team having a better result, and you mentioned that they used more unlabeled data? Yes, they train separate models for the different tasks and combine the unlabeled data. I see; I ask because you're showing that the unlabeled data actually really helps. Yeah, using more unlabeled data can give them that improvement. Okay, any other questions? Okay then, thank you.

2018-05-30 02:14
