Wikidata talk:Lexicographical data

Lexicographical data
Place used to discuss any and all aspects of lexicographical data: the project itself, policy and proposals, individual lexicographical items, technical issues, etc.
On this page, old discussions are archived. An overview of all archives can be found at this page's archive index. The current archive is located at 2024/07.


Multiple grammatical category lexemes


How do we handle this? For example, JMdict lists

最多

in Japanese as both an adjective and a noun, while the English Wiktionary lists it as a noun. I created both 最多/さいた (L1314021) (the most) and the antonym 最少/さいしょう (L1314076), but now I don't know how to handle the two categories here. author  TomT0m / talk page 13:33, 20 March 2024 (UTC)[]

Why not make two entries, one for the noun "最多" and one for the adjective "最多"? It's a bit of work, though. There are about 2,000 words that can be both nouns and adjectives in Japanese. Afaz (talk) 03:51, 23 March 2024 (UTC)[]
I was curious, so I decided to investigate. None of the major Japanese national language dictionaries—Daijirin (Q5209149), Daijisen (Q5209153), or Nihon Kokugo Daijiten (Q4093013)—classify "最多" as an adjectival noun (Q1091269). However, Nihon Kokugo Daijiten (Q4093013) is the only one that does classify "最少" as an adjectival noun (Q1091269). Afaz (talk) 10:50, 23 March 2024 (UTC)[]
Interesting. I sometimes wonder about the quality of Western resources like JMdict, but as a non-native reader and learner, it's still hard for me to read native ones :/ author  TomT0m / talk page 11:21, 23 March 2024 (UTC)[]
"adj-no" has been discussed a lot in JMdict. The 2023-10-14 post here is particularly interesting and given what was said there, "adj-no" should probably be ignored for words tagged as both "n" and "adj-no". Nouns in Japanese can normally be used adjectivally by adding の, so in general I wouldn't create a separate adjective lexeme for a noun unless there's a good reason to. - Nikki (talk) 22:51, 5 May 2024 (UTC)[]

There is הנגאובר/הֶנְגְּאוֹבֵר (L64880), which has a pronunciation statement on its form F1.

The source for that form is LinguaLibre: https://lingualibre.org/wiki/Q810377 . If I add it as a simple URL (reference URL (P854)), it works. It doesn't look great to add it as a plain URL, because there's also a property specific to LinguaLibre (Lingua Libre ID (P10369)); but when I try to use that property for the source (that's the current version), I get a constraint notice.

So what's the good practice for citing LinguaLibre for pronunciations? I can think of a few:

  1. Use URL. Works, but looks a bit too manual.
  2. Just add Lingua Libre ID (P10369) to the lexeme and let the user figure out that that's the source. It's probably reliable enough for humans, but not perfectly machine-readable (socially, LinguaLibre is a nice default, but there's nothing that defines it as a default).
  3. Fix the constraint.

Or maybe something else.

I welcome your advice. Amir E. Aharoni {{🌎🌍🌏}} talk 18:36, 13 April 2024 (UTC)[]

@Amire80: the LinguaLibre wikibase will most likely disappear in the future (in favour of SDC on Commons, which can already do most of the job). Anyway, this identifier is not really a reference and is already on Commons, so why put it again on Wikidata? (Especially as there are far more important things to improve on this lexeme; I quickly added a sense.) Cheers, VIGNERON (talk) 10:41, 15 April 2024 (UTC)[]
It's not particularly important for me. I saw that it's already there and wondered whether it's possible to improve it.
If it's completely redundant, perhaps it should be removed from everywhere by a bot?
(Also, why is it more important to have a gloss in English there?) Amir E. Aharoni {{🌎🌍🌏}} talk 20:04, 15 April 2024 (UTC)[]
Yes, maybe we should remove it all by bot.
It's (relatively) more important to have a sense on a lexeme. And senses need at least one gloss; since I don't speak Hebrew, I added it in English by default (but English is not the most important language here); feel free to add it in Hebrew too (in fact, that would make more sense) or in other languages. Other important points may include: several forms (unless this word is invariable), identifiers (e.g. Ma'agarim ID (P11280); by the way, is it the only identifier for Hebrew?), other lexical statements (etymology, morphology, etc.), references, etc.
Cheers, VIGNERON (talk) 07:56, 18 April 2024 (UTC)[]
OK, but why is it important to have a sense on a lexeme?
As for identifiers, there's also Strong's number (P11416), albeit only for Biblical Hebrew (and it should probably also work for Biblical Aramaic and Greek). There may be other useful identifiers, I'm exploring it now. Amir E. Aharoni {{🌎🌍🌏}} talk 12:13, 18 April 2024 (UTC)[]
Not sure what to say: because words have meaning? Every dictionary gives senses; it's probably not a coincidence.
More identifiers are a good thing.
Cheers, VIGNERON (talk) 12:01, 21 April 2024 (UTC)[]
Yes, but words also have translations, and if I recall correctly, you said elsewhere that there shouldn't be translations on Wikidata lexemes, and it confused me a lot. Why are senses important, but not translations? Amir E. Aharoni {{🌎🌍🌏}} talk 14:33, 21 April 2024 (UTC)[]
@Amire80 There is no translation without a sense, at least. If there is an item for the sense, translations are findable indirectly. It's the same as with Wikipedia interlanguage links in the pre-Wikidata era: if we can avoid keeping a very long list of translations on each of the same-meaning senses in every language (and that list can potentially get very long), it's a big win. We can get some translations with queries (although of course not always, and often not a perfect solution). This approach is illustrated by the Lexeme Party tool and the lexeme challenge. We can also navigate and find translations through gadgets. author  TomT0m / talk page 14:43, 21 April 2024 (UTC)[]
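To make the query-based approach concrete, here is a minimal sketch (in Python, against the public query service at https://query.wikidata.org/sparql) of how translations could be derived indirectly through item for this sense (P5137). The choice of unicorn (L127) and Russian (Q7737) is purely illustrative, and this query shape is only one possible way to do it, not an established convention:

    # Minimal sketch: find lexemes in another language whose senses point to the
    # same concept item (via P5137, "item for this sense") as a sense of a given
    # lexeme. L127 ("unicorn") and Russian (Q7737) are illustrative choices only.
    import requests

    SPARQL = """
    SELECT ?lexeme ?lemma ?gloss WHERE {
      wd:L127 ontolex:sense/wdt:P5137 ?concept .          # concept behind a sense of "unicorn"
      ?lexeme ontolex:sense ?sense ;
              dct:language wd:Q7737 ;                     # restrict to Russian lexemes
              wikibase:lemma ?lemma .
      ?sense wdt:P5137 ?concept .
      OPTIONAL { ?sense skos:definition ?gloss . FILTER(LANG(?gloss) = "ru") }
    }
    """

    response = requests.get(
        "https://query.wikidata.org/sparql",
        params={"query": SPARQL, "format": "json"},
        headers={"User-Agent": "lexeme-translation-sketch/0.1 (example)"},
    )
    for row in response.json()["results"]["bindings"]:
        print(row["lemma"]["value"], "-", row.get("gloss", {}).get("value", ""))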
Well, yeah, that's why it's strange that @VIGNERON says that senses are important, but translations aren't (he said it elsewhere, on Telegram IIRC). What are senses so important for other than translations?
I know that it's possible to make items for senses, but there are many senses for which it's hard to make a Q item, e.g. gingerly (L191285). Amir E. Aharoni {{🌎🌍🌏}} talk 14:49, 21 April 2024 (UTC)[]
@Amire80 To be clear, do you understand that translations belong to a sense? A sense needs at least one gloss on Wikidata. Then, once the sense is created, you can add a translation. So if you added a translation, you necessarily added a sense first. There might be no direct translation for a term in a given language; a gloss might be better.
And yes, sometimes it can be hard to find a relevant item, but maybe it should be done anyway: once we find a way and properties to model it, it's done, and it can help anyone and link to plenty of lexemes. Maybe we should work on more properties or a model for the senses of words like gingerly (L191285), or add Wikidata properties to model senses themselves.
I think we should be able to express things such as "this is a modality of action, precise, as opposed to a rough action" using properties. For example, we already have an item carefulness (Q16514836). author  TomT0m / talk page 15:02, 21 April 2024 (UTC)[]
There are a lot of things I don't understand about how Lexemes work, but I think that I do understand that translations are usually associated with senses and not lexemes (albeit I can think of some scenarios where it would make more sense to translate a whole lexeme).
I'm not opposed in principle to creating Q items for every sense, but I strongly suspect that many other Wikidata editors may be. It kind of fits the "It fulfills a structural need" requirement in Wikidata:Notability, but it does stretch it. Amir E. Aharoni {{🌎🌍🌏}} talk 15:17, 21 April 2024 (UTC)[]
We have been pretty liberal for a long time on that matter, probably just for that kind of purpose.
But I think with a little thought we can get pretty far with items and properties, even if we do not create, strictly speaking, one item per sense.
We can already express that "gingerly actions" are actions, via subclass of (P279), and we could do that in the context of a sense (in the sense's statements) if not in an item. We can model that a "gingerly dance" is a kind of "dance" with "subclass of (P279): dance (Q11639) / walking (Q6537379)". We could find a way to add that the steps are small / careful with the right properties and items. Not totally trivial, of course, but interesting. author  TomT0m / talk page 15:28, 21 April 2024 (UTC)[]
@VIGNERON, is that why you are not enthusiastic about translations? Because they should be modeled through Q items? Or for some other reason? Amir E. Aharoni {{🌎🌍🌏}} talk 15:29, 21 April 2024 (UTC)[]
@Amire80: more or less, yes. I totally agree with what @TomT0m: said: « There is no translation without a sense ». Translations are another complicated question, but it's secondary to senses (you can't add translations without senses, so senses need to come first and are "more important"). And (in most but not all cases; "gingerly" may be an exception) you can deduce/infer translations from the sense, so manually adding data that is already there is redundant and a waste of time (time that we could use more efficiently for other things). The same goes for other lexicographical data, like most of the -nyms (synonyms, antonyms, hypernyms, hyponyms, meronyms, holonyms, etc.) or the etymology (no need to put the full tree on one lexeme; it can easily be constructed with tools: January (L701) → Januarius (L8160) → Janus (L8793), so no need to re-add January (L701) → Janus (L8793); see an example of tool). The idea behind this is that information (and, further on, knowledge) emerges from data; trying to store information in the data is counterproductive and, in the long term, only results in problems. Cheers, VIGNERON (talk) 07:52, 22 April 2024 (UTC)[]
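To make the etymology example concrete: assuming the individual derivation steps are recorded with derived from lexeme (P5191) (a property the comment above does not name, so treat this as an assumption), the full chain behind January (L701) could be reconstructed with a property-path query rather than stored redundantly. A minimal sketch:

    # Minimal sketch: walk a lexeme's derivation chain with a SPARQL property
    # path instead of storing the transitive links on the lexeme itself.
    # Assumes the single steps are recorded with "derived from lexeme" (P5191).
    import requests

    SPARQL = """
    SELECT ?ancestor ?lemma WHERE {
      wd:L701 wdt:P5191+ ?ancestor .      # one or more derivation steps from "January"
      ?ancestor wikibase:lemma ?lemma .
    }
    """

    response = requests.get(
        "https://query.wikidata.org/sparql",
        params={"query": SPARQL, "format": "json"},
        headers={"User-Agent": "lexeme-etymology-sketch/0.1 (example)"},
    )
    for row in response.json()["results"]["bindings"]:
        print(row["ancestor"]["value"], row["lemma"]["value"])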
A sense by itself is not structured. The gloss is human-readable text, not very machine-readable. A human who knows two languages very well, has a very large vocabulary in both of them, and also has a very good memory can maybe deduce a translation from a gloss. A human who lacks any of that cannot do it, and a machine cannot do it either.
That's why I think that glosses are useful at most as definitions in the same language (for modern productive languages; for words in extinct languages, glosses in useful modern languages are probably OK). I don't understand how glosses written in other languages are useful for senses in modern languages. (Also, it's very hard to write good glosses. I love reading dictionary definitions, but I don't claim to be very good at writing them.)
A real structured translation that would be useful to humans and machines is a true link from a sense to a sense in another language. Or better, to a generic sense, which is in no language at all but is like an abstract sense hub, the way Q items are hubs for Wikipedia articles (and, as I wrote above, I doubt that Q items can be good generic sense hubs for all words). But it sounds like you don't like structured translations of this kind, and I'm trying to understand why you don't like them. Or maybe I just misunderstand you in general :) Amir E. Aharoni {{🌎🌍🌏}} talk 13:27, 22 April 2024 (UTC)[]
@Amire80: why is a sense not structured? (Lexemes actually re-use, with some changes, the structures of lemon, the Lexicon Model for Ontologies.) And why did you switch from sense (a part of a lexeme) to gloss (a part of a sense)?
For the rest, yes, glosses are for humans. Translations can't be deduced from glosses (or only very badly and poorly); they are deduced from other statements (including but not limited to item for this sense (P5137)) that can provide the « link from a sense to a sense ». Also, yes, Q items can be seen as an « abstract sense hub » (and indeed they are not alone).
What I don't like (in Wikidata in general) is when data is « multiplied without necessity » (per Occam's razor (Q131012)). It's more work for the same result (and sometimes even a worse result, as it makes the database heavier and slower). Manual translations are often redundant for no reason.
Cheers, VIGNERON (talk) 15:13, 22 April 2024 (UTC)[]
So you're basically advocating for using Q items as the "sense hubs" for translations, and not using the translation (P5972) property for translations? Or am I still misunderstanding?
Is there any language in which this has been used a lot? Amir E. Aharoni {{🌎🌍🌏}} talk 15:34, 22 April 2024 (UTC)[]
@Amire80: kind of, yes. You're missing the point and the bigger picture a bit (the senses are fundamental), but yes.
item for this sense (P5137) is used 200k times across 1,100 languages (with a pretty classic distribution); translation (P5972) is used 100k times across 356 languages (including 84k in Nynorsk (Q25164) and Bokmål (Q25167), almost half of them pointing to the other; all other languages basically don't use it, the next one being English (Q1860) with only 4k uses).
Cheers, VIGNERON (talk) 11:33, 25 April 2024 (UTC)[]
It is quite possible that I am missing the bigger picture, but there is a reason for it: even if this is a good and common practice, I do not see it documented anywhere. If this is the main way to add translations to this dictionary system, I'd expect it to be documented at Wikidata:Lexicographical data/Documentation, which is linked from the top of this page here. Or maybe it is there, and I'm missing it? (That page is longish.)
And I also haven't seen a view that uses this information to display translations. unicorn (L127-S1) and единорог (L144531-S1) both point to item for this sense (P5137), and this is correct, but can I see it anywhere as an "English-Russian dictionary" that just says "unicorn : единорог"? Or with glosses in parentheses, like "unicorn (mythical animal, a horse with one horn) : единорог (мифическое существо, конь с одним рогом)"? Or as a dictionary that translates from English to all the languages that link to this sense?
Maybe some experienced users know this bigger picture from reading a lot of discussions, but for new people who want to dive in as editors or consumers, it's hard to find it. Amir E. Aharoni {{🌎🌍🌏}} talk 13:40, 25 April 2024 (UTC)[]

For a long time I've wanted to do something about lexeme categories that are underrepresented among lexemes with item for this sense (P5137) statements on their senses, in particular adpositions (prepositions, circumpositions, postpositions, etc.), since these typically describe relations between objects (above, in, after and so on). I believe their items should be subclasses of relation (Q930933) in one way or another. I have written a partial proposal for an item tree model at [1] and also made reference to this idea in the property discussion [2], where it may have become lost in the broader discussion under that subject line. Unfortunately I haven't received much feedback, whether positive or negative, on this idea, and I don't think I have enough authority to get started building these item trees on my own, since if they are going to be used they may have a significant impact on how work on lexemes in general (not only adpositions) is conducted.

Therefore I'd like to ask for your comments here, both regarding the merits of the idea as such and the way in which we could come to an agreement on what to do. Adpositional item trees, yes or no? What's your opinion? Is there a better place than my personal subpage or the item for this sense (P5137) property talk page where such trees can be discussed?--SM5POR (talk) 16:55, 1 May 2024 (UTC)[]

I think it's a good initiative; you should try it. But what tree do you mean? I can't see a hierarchy of adpositions on your page... Infovarius (talk) 23:54, 7 May 2024 (UTC)[]
@Infovarius, @Shisma, @VIGNERON, @ZI Jony: Sorry for the unclear reference; immediately after the "in" section I have a section labelled [3] where you can expand my first tree for relation (Q930933), after which I have begun working on one for conjunctions and similar lexical operators. I don't want to put too much work into my personal subpage; I'd rather see a WikiProject dedicated to these item class trees. I also have trouble formatting and editing the language sample translation table, and wonder if there is some convenient tool to help me add an arbitrary column or row to an existing table and start filling it in with translations for comparison. After "in" I'd like to get on with "of", to help find replacement qualifiers for the various statements still using the of (P642) property currently being deprecated. SM5POR (talk) 09:35, 8 May 2024 (UTC)[]
@Infovarius, @Mahir256: Under [4] I found an instruction I would like to challenge: "This property is used to link a sense representing a substantive concept (typically on a noun or adjective) to a Wikidata item representing the concept." Why the apparent restriction to nouns and adjectives? Then there is predicate for (P9970), which is stated to be used with verbs. I don't quite get the sense model this documentation page appears to convey, and I wonder whether it's considered up to date with current best practice.--SM5POR (talk) 09:12, 9 May 2024 (UTC)[]
@Infovarius, @Mahir256, @ZI Jony: Sorry for bugging you all about this, but I think you have the ball in your half of the playground right now. I want to conduct this discussion as part of some active project in Wikidata, not merely in my personal wiki pages. Could you please help me establish a page or project for this discussion, if you think my idea is worth trying out? I have given you a number of references, including one above to documentation which I consider unclear or incomplete, in particular about whether item for this sense (P5137) should be used beyond nouns and adjectives, since using it with adpositions seems to do exactly that.--SM5POR (talk) 14:45, 13 May 2024 (UTC)[]
@SM5POR: I think any attempt at establishing a hierarchy in items of relationships—spatiotemporal or otherwise—typically expressed by adpositions should be backed up by sources, and the relationships should be described in as language-neutral a fashion as possible (i.e. not making reference to specific languages or specific words in those languages). You may find a volume like Adpositions (Q119239595) useful as a starting point.
As for the comment regarding the documentation of P5137, the term "typically" doesn't introduce any sort of restriction; the use of "substantive concept" was intended to distinguish its primary—again, not a restrictive word!—use from that of P9970. I do hope to clarify it and other incomplete documentation subsections soon. Mahir256 (talk) 15:04, 13 May 2024 (UTC)[]
@Mahir256, @Infovarius: Thank you for clarifying this, and I'm sorry for misinterpreting the documentation. I'd like to add that the Swedish word for "noun" happens to be "substantiv", possibly contributing to my reading of "substantive concept" as more restrictive than it was meant. I appreciate your suggested literature reference and I agree completely that it should be used as a source; unfortunately I don't have access to that work myself, so I hope someone who does will be able to contribute citations for our item trees. But my practical question of where to conduct this discussion remains. Should we perhaps allocate a section of the documentation page, which doesn't seem to be matched with a corresponding talk page of its own, or create a separate project page somewhere else? As a technical compromise, I could offer to create one among my personal pages, but then the issue becomes one of advertising it appropriately so that anyone seeking info on item for this sense (P5137) will find the discussion and be able to participate. Where can I find best current practice with respect to project pages? Should we write something at [Wikidata:WikiProject_Interesting_Content#Suggestions_for_future_content]? Maybe it's a data quality issue (we have a page tree for those)?--SM5POR (talk) 12:27, 14 May 2024 (UTC)[]
You can try to download the book somewhere here. --Infovarius (talk) 19:16, 14 May 2024 (UTC)[]

A snowclone (Q2338287) is a phrasal template (src: https://snowclones.org/about) like "old X never die, they just Y". The use of X, Y, Z, A, etc. seems to be the consensus way to notate the 'blanks' of the template, but it strikes me as rather ad hoc when we're considering a semantic database that might be able to represent that concept of replaceability with better fidelity. Arlo Barnes (talk) 06:28, 17 May 2024 (UTC)[]

@Arlo Barnes: good point, I'm not exactly sure how to deal with it; especially as lemmas can contain X as a letter in itself and not as a placeholder (like in "X marks the spot" or "fragile X syndrome"). One way could be to explicitly put a placeholder in combines lexemes (P5238). PS: it's not only snowclones; many phrases are in a similar situation. Cheers, VIGNERON (talk) 14:19, 15 June 2024 (UTC)[]

I'm trying to add a statement with a lexeme I created a few weeks ago, and I'm still not able to. Wondering when this will be fixed. The lexeme is "bio-" (L1327069). When you search for L:bio- you get it in the results, but it says "Unknown language, Unknown..." in the display of the terms retrieved by the search. When you view the lexeme it does show that it is an English prefix. When I try to add a statement with L1327069, it says "Not found". Any idea when this will be fixed? AdamSeattle (talk) 21:06, 1 June 2024 (UTC)[]

- Indonesian version below -

Hello all,

Have you ever wondered how Wikidata stores and models words? How to create and improve Lexemes in your languages? Or even why it is useful and which projects could benefit from it?

The Lexicodays 2024 will answer these questions, and many more. During this online event, you will be able to learn more about Lexicographical Data on Wikidata, to discover how to model words in your languages, and to try out various tools that make it easier to work on Lexemes. It offers a space for editors involved in creating and maintaining Lexemes to discuss their ideas, challenges and best practices.

The online event will take place on June 28, 29 and 30, with sessions replicated in different languages and at different times across time zones. It is co-organized by Wikimedia Deutschland and the Software Collaboration Team in Indonesia, and we will focus on the languages of Indonesia and the Wikidata community in Indonesia. The event is open to everyone regardless of their knowledge of Lexemes. Most sessions will be recorded and published after the event.

On the main event page, you can discover the structure of the program, which will keep evolving in the upcoming weeks. We are also welcoming proposals for the program until June 20th - we are particularly interested in introductions to Lexicographical Data in different languages, and discussions run by community members on how to improve modelling and documentation in a specific language.

We will launch registration for the event in the upcoming days - if you’re interested, stay tuned by following the talk page or joining the Lexicographical Data Telegram group.

If you have any questions, feel free to write on the talk page of the event. See you soon, Léa (Lea Lacroix (WMDE)) and Raisha (Fexpr).

---

Hello, friends!

Have you ever wondered how Wikidata stores and models words? How to create and improve Lexemes in the languages you speak? Why are Lexemes useful? Which projects would benefit from them?

Lexicodays 2024 will answer these questions, and many more. During this online event, you will be able to learn more about Lexicographical Data on Wikidata, discover how to model words in your language, and try out various tools that make it easier to edit Lexemes. The event offers a space for editors involved in creating and maintaining Lexemes to discuss ideas, challenges, and best practices with each other.

The online event will take place on June 28, 29, and 30, with sessions spread across several time zones and similar sessions delivered in different languages. It is co-organized by Wikimedia Deutschland and the Software Collaboration Team in Indonesia, and it focuses on the languages spoken in Indonesia and the Wikidata community in Indonesia. The event is open to everyone, regardless of how familiar you are with Lexemes. We will record most sessions and publish them after the event ends.

You can find the schedule of activities on the main event page, which we will keep updating over the coming weeks. We are also holding an open call for session proposals until June 20. We are particularly interested in introductions to Lexicographical Data in various languages, and in discussions run by community members on how to improve modelling and documentation in a specific language.

We will open registration for the event in the coming days. If you are interested, please keep an eye on this talk page or join the Lexicographical Data Telegram group.

If you have any questions, feel free to write on the Lexicodays 2024 event talk page. See you soon, Léa (Lea Lacroix (WMDE)) and Raisha (Fexpr). Lea Lacroix (WMDE) (talk) 09:00, 3 June 2024 (UTC)[]

Hello all,
As a reminder, the Lexicodays 2024, an online event dedicated to Lexicographical Data on Wikidata, will take place on June 28, 29 and 30, with sessions replicated in different languages and at different times across time zones.
The event will take place on both Zoom and Jitsi, and access will be free without registration (the access links will be added to the program page). However, if you’re planning to join, we invite you to add your username to the Participants page.
We also remind you that you can contribute to the program until June 20th by adding a proposal to the talk page. You’ll find more information here.
We are particularly interested in introductions to Lexicographical Data in different languages, and discussions run by community members on how to improve modelling and documentation in a specific language. You can also present tools or Lexeme use cases.
If you have any questions, feel free to reach out to Léa (Lea Lacroix (WMDE)) or Raisha (Raisha (WSC)).
We’re looking forward to seeing you at the Lexicodays! Lea Lacroix (WMDE) (talk) 10:28, 18 June 2024 (UTC)[]
Hello all,
The Lexicodays 2024 will take place this week, on June 28, 29 and 30!
The event will take place on both Zoom and Jitsi, and access will be free without registration (the access links will be added to the program page). However, if you’re planning to join, we invite you to add your username to the Participants page. The event will include sessions replicated in different languages and at different times across time zones.
Here are a few interesting sessions that you will find in the program:
  • Introduction to Lexicographical data and how to model words in Wikidata
  • Discussions about modelling proverbs, sayings, compound words and predicates
  • Presentation of some useful tools
  • Modelling sessions and editathons in various languages of Indonesia
  • Introduction to Abstract Wikipedia and how it will work together with Lexemes
  • Exploring how to generate sentences with Lexemes
Note that most sessions will be recorded and available after the event.
If you have any questions, feel free to reach out to Léa (Lea Lacroix (WMDE)) or Raisha (Raisha (WSC)).
We’re looking forward to seeing you at the Lexicodays! Lea Lacroix (WMDE) (talk) 10:20, 24 June 2024 (UTC)[]

Hi,

I noticed that around 1,600 lexemes for verbs use instance of (P31) → irregular verb (Q70235) (https://w.wiki/AX5f), where I would rather have used the more specific conjugation class (P5186) → irregular verb (Q70235). What do you think, should we move them or not? And if so, does someone have a bot to move them?

Cheers, VIGNERON (talk) 12:24, 29 June 2024 (UTC)[]
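For reference, the affected lexemes could be listed with something along the lines of this minimal sketch (the actual query behind the w.wiki link above may differ):

    # Minimal sketch: list lexemes carrying "instance of (P31) = irregular verb
    # (Q70235)", i.e. the statements discussed above. Illustration only; the
    # linked w.wiki query may be written differently.
    import requests

    SPARQL = """
    SELECT ?lexeme ?lemma WHERE {
      ?lexeme wdt:P31 wd:Q70235 ;
              wikibase:lemma ?lemma .
    }
    """

    response = requests.get(
        "https://query.wikidata.org/sparql",
        params={"query": SPARQL, "format": "json"},
        headers={"User-Agent": "irregular-verb-list-sketch/0.1 (example)"},
    )
    print(len(response.json()["results"]["bindings"]), "lexemes found")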

irregular verb (Q70235) doesn't look like specific conjugation class (Q53996674). --Infovarius (talk) 20:05, 30 June 2024 (UTC)[]
I don't think conjugation class (P5186) would be right; an irregular verb is a verb whose conjugation behaves irregularly in some way:
- The conjugation class is a property of a verb, so I don't think its values should be subclasses of verb (Q24905) (like how the gender of a masculine noun is "masculine", not "masculine noun").
- "irregular" isn't a specific conjugation class. The verb might still have a conjugation class but be irregular because it has one or more irregular forms. It might be irregular because it follows one conjugation class for some forms and another for the rest.
- Nikki (talk) 06:27, 1 July 2024 (UTC)[]

Should pa'al (Q7265893) and Fa3aL (Q114419665) (etc.) be linked in some way? My knowledge of Hebrew is very rudimentary and I haven't looked into the details, but these kinds of pairs seem to be related. Disclosure: I have created the items like Fa3aL (Q114419665) and started to use them for Arabic varieties in statements like كَتَب (L1331764-F1) → uses (P2283) → Fa3aL (Q114419665). --Marsupium (talk) 21:54, 1 July 2024 (UTC)[]