Modifier and Type | Method and Description |
---|---|
AnalyzedTokenReadings[] |
AnalyzedSentence.getPreDisambigTokens() |
AnalyzedTokenReadings[] |
AnalyzedSentence.getPreDisambigTokensWithoutWhitespace() |
AnalyzedTokenReadings[] |
AnalyzedSentence.getTokens()
Returns the
AnalyzedTokenReadings of the analyzed text. |
AnalyzedTokenReadings[] |
AnalyzedSentence.getTokensWithoutWhitespace()
Returns the
AnalyzedTokenReadings of the analyzed text, with
whitespace tokens removed but with the artificial SENT_START
token included. |
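A minimal usage sketch for the two accessors above (the JLanguageTool setup and the English module are assumptions, not part of this listing):

```java
import java.io.IOException;
import org.languagetool.AnalyzedSentence;
import org.languagetool.AnalyzedTokenReadings;
import org.languagetool.JLanguageTool;
import org.languagetool.language.AmericanEnglish;

public class TokenDump {
  public static void main(String[] args) throws IOException {
    JLanguageTool lt = new JLanguageTool(new AmericanEnglish());
    AnalyzedSentence sentence = lt.getAnalyzedSentence("This is a short test.");
    // Whitespace tokens are dropped, but the artificial SENT_START token is kept at index 0.
    for (AnalyzedTokenReadings atr : sentence.getTokensWithoutWhitespace()) {
      System.out.println(atr.getToken() + " -> " + atr.getReadings());
    }
  }
}
```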
Constructor and Description |
---|
AnalyzedSentence(AnalyzedTokenReadings[] tokens)
Creates an AnalyzedSentence from the given
AnalyzedTokenReadings . |
AnalyzedSentence(AnalyzedTokenReadings[] tokens,
AnalyzedTokenReadings[] preDisambigTokens) |
AnalyzedTokenReadings(AnalyzedTokenReadings oldAtr,
List<AnalyzedToken> newReadings,
String ruleApplied) |
Modifier and Type | Method and Description |
---|---|
AnalyzedTokenReadings |
ChunkTaggedToken.getReadings() |
Modifier and Type | Method and Description |
---|---|
void |
RussianChunker.addChunkTags(List<AnalyzedTokenReadings> tokenReadings) |
void |
GermanChunker.addChunkTags(List<AnalyzedTokenReadings> tokenReadings) |
void |
EnglishChunker.addChunkTags(List<AnalyzedTokenReadings> tokenReadings) |
void |
Chunker.addChunkTags(List<AnalyzedTokenReadings> sentenceTokenReadings) |
Constructor and Description |
---|
ChunkTaggedToken(String token,
List<ChunkTag> chunkTags,
AnalyzedTokenReadings readings) |
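Every chunker listed above implements the same addChunkTags contract: it mutates the passed AnalyzedTokenReadings in place. A hedged sketch of a trivial implementation (the isWhitespace() and setChunkTags(...) calls are assumptions about the AnalyzedTokenReadings API, not shown in this listing):

```java
import java.util.Collections;
import java.util.List;
import org.languagetool.AnalyzedTokenReadings;
import org.languagetool.chunking.ChunkTag;
import org.languagetool.chunking.Chunker;

// Toy chunker: marks every non-whitespace token with a single "B-X" chunk tag.
public class TrivialChunker implements Chunker {
  @Override
  public void addChunkTags(List<AnalyzedTokenReadings> tokenReadings) {
    for (AnalyzedTokenReadings atr : tokenReadings) {
      if (!atr.isWhitespace()) {
        atr.setChunkTags(Collections.singletonList(new ChunkTag("B-X")));
      }
    }
  }
}
```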
Modifier and Type | Method and Description |
---|---|
List<String> |
ContextBuilder.getContext(AnalyzedTokenReadings[] tokens,
int pos,
int contextSize) |
Modifier and Type | Method and Description |
---|---|
void |
NoopChunker.addChunkTags(List<AnalyzedTokenReadings> tokenReadings) |
Modifier and Type | Method and Description |
---|---|
protected abstract AnalyzedTokenReadings |
AbstractStatisticSentenceStyleRule.conditionFulfilled(List<AnalyzedTokenReadings> tokens)
Condition to generate a hint (possibly including all exceptions)
Returns:
< nAnalysedToken, if the condition is not fulfilled
>= nAnalysedToken, if the condition is fulfilled; the value is the index of the token that ends the hint
|
static AnalyzedTokenReadings |
GRPCUtils.fromGRPC(MLServerProto.AnalyzedTokenReadings tokenReadings) |
Modifier and Type | Method and Description |
---|---|
protected abstract List<AnalyzedTokenReadings> |
PartialPosTagFilter.tag(String token) |
Modifier and Type | Method and Description |
---|---|
RuleMatch |
AbstractTextToNumberFilter.acceptRuleMatch(RuleMatch match,
Map<String,String> arguments,
int patternTokenPos,
AnalyzedTokenReadings[] patternTokens) |
RuleMatch |
AdaptSuggestionsFilter.acceptRuleMatch(RuleMatch match,
Map<String,String> arguments,
int patternTokenPos,
AnalyzedTokenReadings[] patternTokens) |
RuleMatch |
ConvertToSentenceCaseFilter.acceptRuleMatch(RuleMatch match,
Map<String,String> arguments,
int patternTokenPos,
AnalyzedTokenReadings[] patternTokens) |
RuleMatch |
AbstractDateCheckFilter.acceptRuleMatch(RuleMatch match,
Map<String,String> args,
int patternTokenPos,
AnalyzedTokenReadings[] patternTokens) |
RuleMatch |
AbstractMakeContractionsFilter.acceptRuleMatch(RuleMatch match,
Map<String,String> arguments,
int patternTokenPos,
AnalyzedTokenReadings[] patternTokens) |
RuleMatch |
AddCommasFilter.acceptRuleMatch(RuleMatch match,
Map<String,String> arguments,
int patternTokenPos,
AnalyzedTokenReadings[] patternTokens) |
RuleMatch |
AbstractSuppressMisspelledSuggestionsFilter.acceptRuleMatch(RuleMatch match,
Map<String,String> arguments,
int patternTokenPos,
AnalyzedTokenReadings[] patternTokens) |
RuleMatch |
AbstractFindSuggestionsFilter.acceptRuleMatch(RuleMatch match,
Map<String,String> arguments,
int patternTokenPos,
AnalyzedTokenReadings[] patternTokens) |
RuleMatch |
DateRangeChecker.acceptRuleMatch(RuleMatch match,
Map<String,String> arguments,
int patternTokenPos,
AnalyzedTokenReadings[] patternTokens) |
RuleMatch |
WhitespaceCheckFilter.acceptRuleMatch(RuleMatch match,
Map<String,String> arguments,
int patternTokenPos,
AnalyzedTokenReadings[] patternTokens) |
RuleMatch |
PartialPosTagFilter.acceptRuleMatch(RuleMatch match,
Map<String,String> args,
int patternTokenPos,
AnalyzedTokenReadings[] patternTokens) |
RuleMatch |
AbstractNumberInWordFilter.acceptRuleMatch(RuleMatch match,
Map<String,String> arguments,
int patternTokenPos,
AnalyzedTokenReadings[] patternTokens) |
RuleMatch |
AbstractAdvancedSynthesizerFilter.acceptRuleMatch(RuleMatch match,
Map<String,String> arguments,
int patternTokenPos,
AnalyzedTokenReadings[] patternTokens) |
RuleMatch |
ShortenedYearRangeChecker.acceptRuleMatch(RuleMatch match,
Map<String,String> arguments,
int patternTokenPos,
AnalyzedTokenReadings[] patternTokens) |
RuleMatch |
AbstractFutureDateFilter.acceptRuleMatch(RuleMatch match,
Map<String,String> args,
int patternTokenPos,
AnalyzedTokenReadings[] patternTokens) |
RuleMatch |
AbstractSuppressIfAnyRuleMatchesFilter.acceptRuleMatch(RuleMatch match,
Map<String,String> arguments,
int patternTokenPos,
AnalyzedTokenReadings[] patternTokens) |
RuleMatch |
MultitokenSpellerFilter.acceptRuleMatch(RuleMatch match,
Map<String,String> arguments,
int patternTokenPos,
AnalyzedTokenReadings[] patternTokens) |
RuleMatch |
AbstractNewYearDateFilter.acceptRuleMatch(RuleMatch match,
Map<String,String> args,
int patternTokenPos,
AnalyzedTokenReadings[] patternTokens) |
protected void |
AbstractSimpleReplaceRule2.addToQueue(AnalyzedTokenReadings token,
Queue<AnalyzedTokenReadings> prevTokens) |
protected abstract int |
AbstractStatisticStyleRule.conditionFulfilled(AnalyzedTokenReadings[] tokens,
int nAnalysedToken)
Condition to generate a hint (possibly including all exceptions)
Returns:
< nAnalysedToken, if the condition is not fulfilled
>= nAnalysedToken, if the condition is fulfilled; the value is the index of the token that ends the hint
|
protected RuleMatch |
AbstractSimpleReplaceRule.createRuleMatch(AnalyzedTokenReadings tokenReadings,
List<String> replacements,
AnalyzedSentence sentence,
String originalTokenStr) |
protected List<RuleMatch> |
AbstractSimpleReplaceRule.findMatches(AnalyzedTokenReadings tokenReadings,
AnalyzedSentence sentence) |
protected abstract List<String> |
AbstractFindSuggestionsFilter.getSpellingSuggestions(AnalyzedTokenReadings atr) |
protected List<String> |
WordRepeatBeginningRule.getSuggestions(AnalyzedTokenReadings analyzedToken) |
boolean |
WordRepeatRule.ignore(AnalyzedTokenReadings[] tokens,
int position)
Implement this method to return
true if there's
a potential word repetition at the current position that should be ignored, i.e. if no error should be created (see the sketch after this table). |
protected boolean |
WordRepeatBeginningRule.isAdverb(AnalyzedTokenReadings token) |
boolean |
ParagraphRepeatBeginningRule.isArticle(AnalyzedTokenReadings token) |
protected abstract boolean |
AbstractStyleTooOftenUsedWordRule.isException(AnalyzedTokenReadings token)
Returns true if an exception is defined for the given token.
|
protected boolean |
UppercaseSentenceStartRule.isException(AnalyzedTokenReadings[] tokens,
int tokenIdx) |
protected boolean |
CommaWhitespaceRule.isException(AnalyzedTokenReadings[] tokens,
int tokenIdx) |
protected boolean |
AbstractFillerWordsRule.isException(AnalyzedTokenReadings[] tokens,
int num) |
protected abstract boolean |
AbstractRepeatedWordsRule.isException(AnalyzedTokenReadings[] tokens,
int i,
boolean sentStart,
boolean isCapitalized,
boolean isAllUppercase) |
protected boolean |
AbstractStyleRepeatedWordRule.isExceptionPair(AnalyzedTokenReadings token1,
AnalyzedTokenReadings token2) |
protected boolean |
AbstractStatisticSentenceStyleRule.isMark(AnalyzedTokenReadings token) |
protected boolean |
GenericUnpairedBracketsRule.isNoException(String token,
AnalyzedTokenReadings[] tokens,
int i,
int j,
boolean precSpace,
boolean follSpace,
UnsyncStack<SymbolLocator> symbolStack)
Generic method to specify an exception.
|
protected boolean |
AbstractStatisticSentenceStyleRule.isOpeningQuote(AnalyzedTokenReadings token) |
protected abstract boolean |
AbstractTextToNumberFilter.isPercentage(AnalyzedTokenReadings[] patternTokens,
int i) |
protected boolean |
AbstractFindSuggestionsFilter.isSuggestionException(AnalyzedTokenReadings analyzedSuggestion) |
protected boolean |
AbstractSimpleReplaceRule.isTagged(AnalyzedTokenReadings tokenReadings)
This method allows overriding which tags mark a token as tagged.
|
protected abstract boolean |
AbstractStyleTooOftenUsedWordRule.isToCountedWord(AnalyzedTokenReadings token)
Returns true if the given token is a word that should be counted.
|
protected boolean |
AbstractSimpleReplaceRule2.isTokenException(AnalyzedTokenReadings atr) |
protected boolean |
AbstractSimpleReplaceRule.isTokenException(AnalyzedTokenReadings atr) |
protected abstract boolean |
AbstractStyleRepeatedWordRule.isTokenPair(AnalyzedTokenReadings[] tokens,
int n,
boolean before) |
protected abstract boolean |
AbstractStyleRepeatedWordRule.isTokenToCheck(AnalyzedTokenReadings token) |
protected abstract boolean |
AbstractStatisticStyleRule.sentenceConditionFulfilled(AnalyzedTokenReadings[] tokens,
int nAnalysedToken)
Condition to generate a hint related to the sentence (possibly including all exceptions)
|
protected URL |
AbstractStyleRepeatedWordRule.setURL(AnalyzedTokenReadings token) |
protected abstract String |
AbstractStyleTooOftenUsedWordRule.toAddedLemma(AnalyzedTokenReadings token)
Returns the lemma that should be added to the word map.
|
static MLServerProto.AnalyzedTokenReadings |
GRPCUtils.toGRPC(AnalyzedTokenReadings readings) |
protected boolean |
WordRepeatRule.wordRepetitionOf(String word,
AnalyzedTokenReadings[] tokens,
int position) |
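The ignore(...) and wordRepetitionOf(...) hooks listed above are how language-specific subclasses whitelist legitimate repetitions. A hedged sketch under that assumption (the rule id and the whitelisted word are purely illustrative):

```java
import java.util.ResourceBundle;
import org.languagetool.AnalyzedTokenReadings;
import org.languagetool.Language;
import org.languagetool.rules.WordRepeatRule;

// Sketch: reuse WordRepeatRule but accept the repetition "bye bye" without an error.
public class MyWordRepeatRule extends WordRepeatRule {

  public MyWordRepeatRule(ResourceBundle messages, Language language) {
    super(messages, language);
  }

  @Override
  public boolean ignore(AnalyzedTokenReadings[] tokens, int position) {
    if (wordRepetitionOf("bye", tokens, position)) {
      return true;  // "bye bye" should not be flagged
    }
    return super.ignore(tokens, position);
  }

  @Override
  public String getId() {
    return "MY_WORD_REPEAT_RULE";  // hypothetical id for this sketch
  }
}
```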
Modifier and Type | Method and Description |
---|---|
protected void |
AbstractSimpleReplaceRule2.addToQueue(AnalyzedTokenReadings token,
Queue<AnalyzedTokenReadings> prevTokens) |
protected abstract AnalyzedTokenReadings |
AbstractStatisticSentenceStyleRule.conditionFulfilled(List<AnalyzedTokenReadings> tokens)
Condition to generate a hint (possibly including all exceptions)
Returns:
< nAnalysedToken, if the condition is not fulfilled
>= nAnalysedToken, if the condition is fulfilled; the value is the index of the token that ends the hint
|
Modifier and Type | Method and Description |
---|---|
boolean |
ArabicWordRepeatRule.ignore(AnalyzedTokenReadings[] tokens,
int position) |
Modifier and Type | Method and Description |
---|---|
RuleMatch |
ArabicAdjectiveToExclamationFilter.acceptRuleMatch(RuleMatch match,
Map<String,String> arguments,
int patternTokenPos,
AnalyzedTokenReadings[] patternTokens) |
RuleMatch |
ArabicDMYDateCheckFilter.acceptRuleMatch(RuleMatch match,
Map<String,String> args,
int patternTokenPos,
AnalyzedTokenReadings[] patternTokens) |
RuleMatch |
ArabicVerbToMafoulMutlaqFilter.acceptRuleMatch(RuleMatch match,
Map<String,String> arguments,
int patternTokenPos,
AnalyzedTokenReadings[] patternTokens) |
RuleMatch |
ArabicMasdarToVerbFilter.acceptRuleMatch(RuleMatch match,
Map<String,String> arguments,
int patternTokenPos,
AnalyzedTokenReadings[] patternTokens) |
RuleMatch |
ArabicNumberPhraseFilter.acceptRuleMatch(RuleMatch match,
Map<String,String> arguments,
int patternTokenPos,
AnalyzedTokenReadings[] patternTokens) |
static List<String> |
ArabicNumberPhraseFilter.prepareSuggestion(String numPhrase,
String previousWord,
AnalyzedTokenReadings nextWord,
boolean feminin,
boolean attached,
String inflection) |
List<String> |
ArabicNumberPhraseFilter.prepareSuggestionWithUnits(String numPhrase,
String previousWord,
AnalyzedTokenReadings nextWord,
boolean feminin,
boolean attached,
String inflection) |
Modifier and Type | Method and Description |
---|---|
RuleMatch |
DiacriticsCheckFilter.acceptRuleMatch(RuleMatch match,
Map<String,String> arguments,
int patternTokenPos,
AnalyzedTokenReadings[] patternTokens) |
RuleMatch |
AnarASuggestionsFilter.acceptRuleMatch(RuleMatch match,
Map<String,String> arguments,
int patternTokenPos,
AnalyzedTokenReadings[] patternTokens) |
RuleMatch |
OblidarseSugestionsFilter.acceptRuleMatch(RuleMatch match,
Map<String,String> arguments,
int patternTokenPos,
AnalyzedTokenReadings[] patternTokens) |
RuleMatch |
PostponedAdjectiveConcordanceFilter.acceptRuleMatch(RuleMatch match,
Map<String,String> arguments,
int patternTokenPos,
AnalyzedTokenReadings[] patternTokens) |
RuleMatch |
CatalanNumberSpellerFilter.acceptRuleMatch(RuleMatch match,
Map<String,String> arguments,
int patternTokenPos,
AnalyzedTokenReadings[] patternTokens) |
RuleMatch |
SynthesizeWithDeterminerFilter.acceptRuleMatch(RuleMatch match,
Map<String,String> arguments,
int patternTokenPos,
AnalyzedTokenReadings[] patternTokens) |
RuleMatch |
PortarTempsSuggestionsFilter.acceptRuleMatch(RuleMatch match,
Map<String,String> arguments,
int patternTokenPos,
AnalyzedTokenReadings[] patternTokens) |
RuleMatch |
AdjustPronounsFilter.acceptRuleMatch(RuleMatch match,
Map<String,String> arguments,
int patternTokenPos,
AnalyzedTokenReadings[] patternTokens) |
RuleMatch |
FindSuggestionsEsFilter.acceptRuleMatch(RuleMatch match,
Map<String,String> arguments,
int patternTokenPos,
AnalyzedTokenReadings[] patternTokens) |
RuleMatch |
DonarTempsSuggestionsFilter.acceptRuleMatch(RuleMatch match,
Map<String,String> arguments,
int patternTokenPos,
AnalyzedTokenReadings[] patternTokens) |
protected List<String> |
FindSuggestionsFilter.getSpellingSuggestions(AnalyzedTokenReadings atr) |
protected List<String> |
CatalanWordRepeatBeginningRule.getSuggestions(AnalyzedTokenReadings token) |
static String[] |
PronomsFeblesHelper.getTwoNextPronouns(AnalyzedTokenReadings[] tokens,
int from) |
boolean |
CatalanWordRepeatRule.ignore(AnalyzedTokenReadings[] tokens,
int position) |
protected boolean |
CatalanWordRepeatBeginningRule.isAdverb(AnalyzedTokenReadings token) |
protected boolean |
CatalanRepeatedWordsRule.isException(AnalyzedTokenReadings[] tokens,
int i,
boolean sentStart,
boolean isCapitalized,
boolean isAllUppercase) |
protected boolean |
CatalanUnpairedBracketsRule.isNoException(String tokenStr,
AnalyzedTokenReadings[] tokens,
int i,
int j,
boolean precSpace,
boolean follSpace,
UnsyncStack<SymbolLocator> symbolStack) |
protected boolean |
TextToNumberFilter.isPercentage(AnalyzedTokenReadings[] patternTokens,
int i) |
protected boolean |
FindSuggestionsFilter.isSuggestionException(AnalyzedTokenReadings analyzedSuggestion) |
protected boolean |
SimpleReplaceDiacriticsIEC.isTokenException(AnalyzedTokenReadings atr) |
protected boolean |
SimpleReplaceBalearicRule.isTokenException(AnalyzedTokenReadings atr) |
protected boolean |
SimpleReplaceAnglicism.isTokenException(AnalyzedTokenReadings atr) |
Modifier and Type | Method and Description |
---|---|
protected AnalyzedTokenReadings |
SentenceWithModalVerbRule.conditionFulfilled(List<AnalyzedTokenReadings> sentence)
Condition: the sentence contains a modal verb.
|
protected AnalyzedTokenReadings |
PassiveSentenceRule.conditionFulfilled(List<AnalyzedTokenReadings> sentence)
Condition: the sentence is passive.
|
protected AnalyzedTokenReadings |
SentenceWithManRule.conditionFulfilled(List<AnalyzedTokenReadings> sentence)
Condition: the sentence contains the word "man".
|
protected AnalyzedTokenReadings |
ConjunctionAtBeginOfSentenceRule.conditionFulfilled(List<AnalyzedTokenReadings> sentence)
Condition: the sentence begins with a conjunction.
|
Modifier and Type | Method and Description |
---|---|
RuleMatch |
YMDNewYearDateFilter.acceptRuleMatch(RuleMatch match,
Map<String,String> args,
int patternTokenPos,
AnalyzedTokenReadings[] patternTokens) |
RuleMatch |
CompoundCheckFilter.acceptRuleMatch(RuleMatch match,
Map<String,String> arguments,
int patternTokenPos,
AnalyzedTokenReadings[] patternTokens) |
RuleMatch |
ValidWordFilter.acceptRuleMatch(RuleMatch match,
Map<String,String> arguments,
int patternTokenPos,
AnalyzedTokenReadings[] patternTokens) |
RuleMatch |
RemoveUnknownCompoundsFilter.acceptRuleMatch(RuleMatch match,
Map<String,String> arguments,
int patternTokenPos,
AnalyzedTokenReadings[] patternTokens) |
RuleMatch |
InsertCommaFilter.acceptRuleMatch(RuleMatch match,
Map<String,String> arguments,
int patternTokenPos,
AnalyzedTokenReadings[] patternTokens) |
RuleMatch |
UppercaseNounReadingFilter.acceptRuleMatch(RuleMatch match,
Map<String,String> arguments,
int patternTokenPos,
AnalyzedTokenReadings[] patternTokens) |
RuleMatch |
RecentYearFilter.acceptRuleMatch(RuleMatch match,
Map<String,String> arguments,
int patternTokenPos,
AnalyzedTokenReadings[] patternTokens) |
RuleMatch |
AdaptSuggestionFilter.acceptRuleMatch(RuleMatch match,
Map<String,String> arguments,
int patternTokenPos,
AnalyzedTokenReadings[] patternTokens) |
RuleMatch |
PotentialCompoundFilter.acceptRuleMatch(RuleMatch match,
Map<String,String> arguments,
int patternTokenPos,
AnalyzedTokenReadings[] patternTokens) |
RuleMatch |
YMDDateCheckFilter.acceptRuleMatch(RuleMatch match,
Map<String,String> args,
int patternTokenPos,
AnalyzedTokenReadings[] patternTokens) |
protected int |
NonSignificantVerbsRule.conditionFulfilled(AnalyzedTokenReadings[] tokens,
int nAnalysedToken) |
protected int |
UnnecessaryPhraseRule.conditionFulfilled(AnalyzedTokenReadings[] tokens,
int nAnalysedToken) |
protected int |
GermanFillerWordsRule.conditionFulfilled(AnalyzedTokenReadings[] tokens,
int nToken) |
static boolean |
GermanHelper.hasReadingOfType(AnalyzedTokenReadings tokenReadings,
GermanToken.POSType type) |
boolean |
GermanWordRepeatRule.ignore(AnalyzedTokenReadings[] tokens,
int position) |
protected boolean |
GermanWordRepeatBeginningRule.isAdverb(AnalyzedTokenReadings token) |
boolean |
GermanParagraphRepeatBeginningRule.isArticle(AnalyzedTokenReadings token) |
protected boolean |
StyleTooOftenUsedNounRule.isException(AnalyzedTokenReadings token) |
protected boolean |
StyleTooOftenUsedVerbRule.isException(AnalyzedTokenReadings token) |
protected boolean |
StyleTooOftenUsedAdjectiveRule.isException(AnalyzedTokenReadings token) |
protected boolean |
GermanCommaWhitespaceRule.isException(AnalyzedTokenReadings[] tokens,
int tokenIdx) |
protected boolean |
GermanRepeatedWordsRule.isException(AnalyzedTokenReadings[] tokens,
int i,
boolean sentStart,
boolean isCapitalized,
boolean isAllUppercase) |
protected boolean |
GermanStyleRepeatedWordRule.isExceptionPair(AnalyzedTokenReadings token1,
AnalyzedTokenReadings token2) |
protected boolean |
StyleTooOftenUsedNounRule.isToCountedWord(AnalyzedTokenReadings token) |
protected boolean |
StyleTooOftenUsedVerbRule.isToCountedWord(AnalyzedTokenReadings token) |
protected boolean |
StyleTooOftenUsedAdjectiveRule.isToCountedWord(AnalyzedTokenReadings token) |
protected boolean |
GermanStyleRepeatedWordRule.isTokenPair(AnalyzedTokenReadings[] tokens,
int n,
boolean before)
Pairs of substantives such as "Arm in Arm", "Seite an Seite", etc. are excluded.
|
protected boolean |
GermanStyleRepeatedWordRule.isTokenToCheck(AnalyzedTokenReadings token)
Only substantives, names, verbs, and adjectives are checked.
|
protected boolean |
NonSignificantVerbsRule.sentenceConditionFulfilled(AnalyzedTokenReadings[] tokens,
int nToken) |
protected boolean |
UnnecessaryPhraseRule.sentenceConditionFulfilled(AnalyzedTokenReadings[] tokens,
int nToken) |
protected boolean |
GermanFillerWordsRule.sentenceConditionFulfilled(AnalyzedTokenReadings[] tokens,
int nToken) |
protected URL |
GermanStyleRepeatedWordRule.setURL(AnalyzedTokenReadings token) |
protected String |
StyleTooOftenUsedNounRule.toAddedLemma(AnalyzedTokenReadings token) |
protected String |
StyleTooOftenUsedVerbRule.toAddedLemma(AnalyzedTokenReadings token) |
protected String |
StyleTooOftenUsedAdjectiveRule.toAddedLemma(AnalyzedTokenReadings token) |
Modifier and Type | Method and Description |
---|---|
protected AnalyzedTokenReadings |
SentenceWithModalVerbRule.conditionFulfilled(List<AnalyzedTokenReadings> sentence)
Condition: the sentence contains a modal verb.
|
protected AnalyzedTokenReadings |
PassiveSentenceRule.conditionFulfilled(List<AnalyzedTokenReadings> sentence)
Condition: the sentence is passive.
|
protected AnalyzedTokenReadings |
SentenceWithManRule.conditionFulfilled(List<AnalyzedTokenReadings> sentence)
Condition: the sentence contains the word "man".
|
protected AnalyzedTokenReadings |
ConjunctionAtBeginOfSentenceRule.conditionFulfilled(List<AnalyzedTokenReadings> sentence)
Condition: the sentence begins with a conjunction.
|
Modifier and Type | Method and Description |
---|---|
protected List<String> |
GreekWordRepeatBeginningRule.getSuggestions(AnalyzedTokenReadings token) |
protected boolean |
GreekWordRepeatBeginningRule.isAdverb(AnalyzedTokenReadings token) |
Modifier and Type | Method and Description |
---|---|
protected List<AnalyzedTokenReadings> |
EnglishPartialPosTagFilter.tag(String token) |
protected List<AnalyzedTokenReadings> |
NoDisambiguationEnglishPartialPosTagFilter.tag(String token) |
Modifier and Type | Method and Description |
---|---|
RuleMatch |
AdverbFilter.acceptRuleMatch(RuleMatch match,
Map<String,String> arguments,
int patternTokenPos,
AnalyzedTokenReadings[] patternTokens) |
RuleMatch |
YMDNewYearDateFilter.acceptRuleMatch(RuleMatch match,
Map<String,String> args,
int patternTokenPos,
AnalyzedTokenReadings[] patternTokens) |
RuleMatch |
YMDDateCheckFilter.acceptRuleMatch(RuleMatch match,
Map<String,String> args,
int patternTokenPos,
AnalyzedTokenReadings[] patternTokens) |
RuleMatch |
OrdinalSuffixFilter.acceptRuleMatch(RuleMatch match,
Map<String,String> arguments,
int patternTokenPos,
AnalyzedTokenReadings[] patternTokens) |
protected List<RuleMatch> |
AbstractEnglishSpellerRule.getRuleMatches(String word,
int startPos,
AnalyzedSentence sentence,
List<RuleMatch> ruleMatchesSoFar,
int idx,
AnalyzedTokenReadings[] tokens) |
protected List<String> |
FindSuggestionsFilter.getSpellingSuggestions(AnalyzedTokenReadings atr) |
protected List<String> |
EnglishWordRepeatBeginningRule.getSuggestions(AnalyzedTokenReadings token) |
boolean |
EnglishWordRepeatRule.ignore(AnalyzedTokenReadings[] tokens,
int position) |
protected boolean |
EnglishWordRepeatBeginningRule.isAdverb(AnalyzedTokenReadings token) |
protected boolean |
StyleTooOftenUsedNounRule.isException(AnalyzedTokenReadings token) |
protected boolean |
StyleTooOftenUsedVerbRule.isException(AnalyzedTokenReadings token) |
protected boolean |
StyleTooOftenUsedAdjectiveRule.isException(AnalyzedTokenReadings token) |
protected boolean |
EnglishRepeatedWordsRule.isException(AnalyzedTokenReadings[] tokens,
int i,
boolean sentStart,
boolean isCapitalized,
boolean isAllUppercase) |
protected boolean |
EnglishUnpairedBracketsRule.isNoException(String tokenStr,
AnalyzedTokenReadings[] tokens,
int i,
int j,
boolean precSpace,
boolean follSpace,
UnsyncStack<SymbolLocator> symbolStack) |
protected boolean |
StyleTooOftenUsedNounRule.isToCountedWord(AnalyzedTokenReadings token) |
protected boolean |
StyleTooOftenUsedVerbRule.isToCountedWord(AnalyzedTokenReadings token) |
protected boolean |
StyleTooOftenUsedAdjectiveRule.isToCountedWord(AnalyzedTokenReadings token) |
protected String |
StyleTooOftenUsedNounRule.toAddedLemma(AnalyzedTokenReadings token) |
protected String |
StyleTooOftenUsedVerbRule.toAddedLemma(AnalyzedTokenReadings token) |
protected String |
StyleTooOftenUsedAdjectiveRule.toAddedLemma(AnalyzedTokenReadings token) |
Modifier and Type | Method and Description |
---|---|
RuleMatch |
PostponedAdjectiveConcordanceFilter.acceptRuleMatch(RuleMatch match,
Map<String,String> arguments,
int patternTokenPos,
AnalyzedTokenReadings[] patternTokens) |
RuleMatch |
ConfusionCheckFilter.acceptRuleMatch(RuleMatch match,
Map<String,String> arguments,
int patternTokenPos,
AnalyzedTokenReadings[] patternTokens) |
protected List<String> |
FindSuggestionsFilter.getSpellingSuggestions(AnalyzedTokenReadings atr) |
protected List<String> |
SpanishWordRepeatBeginningRule.getSuggestions(AnalyzedTokenReadings token) |
boolean |
SpanishWordRepeatRule.ignore(AnalyzedTokenReadings[] tokens,
int position) |
protected boolean |
SpanishWordRepeatBeginningRule.isAdverb(AnalyzedTokenReadings token) |
protected boolean |
SpanishRepeatedWordsRule.isException(AnalyzedTokenReadings[] tokens,
int i,
boolean sentStart,
boolean isCapitalized,
boolean isAllUppercase) |
protected boolean |
SpanishUnpairedBracketsRule.isNoException(String tokenStr,
AnalyzedTokenReadings[] tokens,
int i,
int j,
boolean precSpace,
boolean follSpace,
UnsyncStack<SymbolLocator> symbolStack) |
protected boolean |
TextToNumberFilter.isPercentage(AnalyzedTokenReadings[] patternTokens,
int i) |
Modifier and Type | Method and Description |
---|---|
boolean |
PersianWordRepeatRule.ignore(AnalyzedTokenReadings[] tokens,
int position) |
protected boolean |
PersianWordRepeatBeginningRule.isAdverb(AnalyzedTokenReadings token) |
Modifier and Type | Method and Description |
---|---|
protected List<AnalyzedTokenReadings> |
FrenchPartialPosTagFilter.tag(String token) |
Modifier and Type | Method and Description |
---|---|
RuleMatch |
SuggestionsFilter.acceptRuleMatch(RuleMatch match,
Map<String,String> arguments,
int patternTokenPos,
AnalyzedTokenReadings[] patternTokens) |
RuleMatch |
PostponedAdjectiveConcordanceFilter.acceptRuleMatch(RuleMatch match,
Map<String,String> arguments,
int patternTokenPos,
AnalyzedTokenReadings[] patternTokens) |
RuleMatch |
WordWithDeterminerFilter.acceptRuleMatch(RuleMatch match,
Map<String,String> arguments,
int patternTokenPos,
AnalyzedTokenReadings[] patternTokens) |
RuleMatch |
InterrogativeVerbFilter.acceptRuleMatch(RuleMatch match,
Map<String,String> arguments,
int patternTokenPos,
AnalyzedTokenReadings[] patternTokens) |
RuleMatch |
DMYDateCheckFilter.acceptRuleMatch(RuleMatch match,
Map<String,String> args,
int patternTokenPos,
AnalyzedTokenReadings[] patternTokens) |
protected List<String> |
FindSuggestionsFilter.getSpellingSuggestions(AnalyzedTokenReadings atr) |
protected boolean |
QuestionWhitespaceRule.isAllowedWhitespaceChar(AnalyzedTokenReadings[] tokens,
int i) |
protected boolean |
QuestionWhitespaceStrictRule.isAllowedWhitespaceChar(AnalyzedTokenReadings[] tokens,
int i) |
protected boolean |
FrenchRepeatedWordsRule.isException(AnalyzedTokenReadings[] tokens,
int i,
boolean sentStart,
boolean isCapitalized,
boolean isAllUppercase) |
Modifier and Type | Method and Description |
---|---|
protected List<AnalyzedTokenReadings> |
IrishPartialPosTagFilter.tag(String token) |
protected List<AnalyzedTokenReadings> |
NoDisambiguationIrishPartialPosTagFilter.tag(String token) |
Modifier and Type | Method and Description |
---|---|
boolean |
ItalianWordRepeatRule.ignore(AnalyzedTokenReadings[] tokens,
int position) |
Modifier and Type | Method and Description |
---|---|
RuleMatch |
CompoundFilter.acceptRuleMatch(RuleMatch match,
Map<String,String> arguments,
int patternTokenPos,
AnalyzedTokenReadings[] patternTokens) |
Modifier and Type | Field and Description |
---|---|
protected AnalyzedTokenReadings[] |
AbstractPatternRulePerformer.unifiedTokens |
Modifier and Type | Method and Description |
---|---|
AnalyzedTokenReadings |
MatchState.filterReadings() |
AnalyzedTokenReadings[] |
Unifier.getFinalUnified()
Used for getting a unified sequence in cases where the simple test method
Unifier.isUnified(AnalyzedToken, Map, boolean) was used. |
AnalyzedTokenReadings[] |
Unifier.getUnifiedTokens()
Gets a full sequence of filtered tokens.
|
Modifier and Type | Method and Description |
---|---|
RuleMatch |
ApostropheTypeFilter.acceptRuleMatch(RuleMatch match,
Map<String,String> arguments,
int patternTokenPos,
AnalyzedTokenReadings[] patternTokens) |
abstract RuleMatch |
RuleFilter.acceptRuleMatch(RuleMatch match,
Map<String,String> arguments,
int patternTokenPos,
AnalyzedTokenReadings[] patternTokens)
Returns the original rule match or a modified one, or
null
if the rule match is filtered out (see the sketch after this table). |
void |
Unifier.addNeutralElement(AnalyzedTokenReadings analyzedTokenReadings)
Used to add neutral elements (AnalyzedTokenReadings) to the unified sequence. |
PatternToken |
PatternToken.compile(AnalyzedTokenReadings token,
Synthesizer synth)
Prepare PatternToken for matching by formatting its string token and POS (if the Element is supposed
to refer to some other token).
|
MatchState |
Match.createState(Synthesizer synthesizer,
AnalyzedTokenReadings token)
Creates a state used for actually matching a token.
|
MatchState |
Match.createState(Synthesizer synthesizer,
AnalyzedTokenReadings[] tokens,
int index,
int next)
Creates a state used for actually matching a token.
|
protected void |
AbstractPatternRulePerformer.doMatch(AnalyzedSentence sentence,
AnalyzedTokenReadings[] tokens,
AbstractPatternRulePerformer.MatchConsumer consumer) |
protected int |
RuleFilter.getPosition(String fromStr,
AnalyzedTokenReadings[] patternTokens,
RuleMatch match) |
Map<String,String> |
RuleFilterEvaluator.getResolvedArguments(String filterArgs,
AnalyzedTokenReadings[] patternTokens,
int patternTokenPos,
List<Integer> tokenPositions)
Resolves the backref arguments, e.g. replaces
\1 by the value of the first token in the pattern. |
protected boolean |
RuleFilter.isMatchAtSentenceStart(AnalyzedTokenReadings[] tokens,
RuleMatch match) |
boolean |
PatternToken.isMatchedByPreviousException(AnalyzedTokenReadings prevToken)
Checks whether an exception for a previous token matches all readings of a given token (in case
the exception had scope == "previous").
|
boolean |
PatternTokenMatcher.isMatchedByPreviousException(AnalyzedTokenReadings token) |
boolean |
RuleFilter.matches(Map<String,String> arguments,
AnalyzedTokenReadings[] patternTokens,
int firstMatchToken) |
void |
PatternTokenMatcher.resolveReference(int firstMatchToken,
AnalyzedTokenReadings[] tokens,
Language language) |
RuleMatch |
RuleFilterEvaluator.runFilter(String filterArgs,
RuleMatch ruleMatch,
AnalyzedTokenReadings[] patternTokens,
int patternTokenPos,
List<Integer> tokenPositions) |
void |
MatchState.setToken(AnalyzedTokenReadings token) |
void |
MatchState.setToken(AnalyzedTokenReadings[] tokens,
int index,
int next)
Sets the token to be formatted etc., including support for skipped tokens.
|
protected boolean |
PatternRuleMatcher.testAllReadings(AnalyzedTokenReadings[] tokens,
PatternTokenMatcher matcher,
PatternTokenMatcher prevElement,
int tokenNo,
int firstMatchToken,
int prevSkipNext) |
protected boolean |
AbstractPatternRulePerformer.testAllReadings(AnalyzedTokenReadings[] tokens,
PatternTokenMatcher matcher,
PatternTokenMatcher prevElement,
int tokenNo,
int firstMatchToken,
int prevSkipNext) |
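The acceptRuleMatch(...) hook of RuleFilter listed above either returns the (possibly modified) match or null to drop it; the arguments map holds the resolved values from the rule's filter definition (see RuleFilterEvaluator.getResolvedArguments). A hedged sketch of a custom filter (the argument names "pos" and "tokenIndex" and the hasPartialPosTag(...) call are assumptions for illustration):

```java
import java.util.Map;
import org.languagetool.AnalyzedTokenReadings;
import org.languagetool.rules.RuleMatch;
import org.languagetool.rules.patterns.RuleFilter;

// Keeps the match only if the token at the given index carries the given partial POS tag.
public class HasPosPrefixFilter extends RuleFilter {
  @Override
  public RuleMatch acceptRuleMatch(RuleMatch match, Map<String, String> arguments,
                                   int patternTokenPos, AnalyzedTokenReadings[] patternTokens) {
    String requiredPos = arguments.get("pos");                // e.g. "NN" (illustrative)
    int idx = Integer.parseInt(arguments.get("tokenIndex"));  // 0-based index into patternTokens
    AnalyzedTokenReadings token = patternTokens[idx];
    // Returning null filters the rule match out entirely.
    return token.hasPartialPosTag(requiredPos) ? match : null;
  }
}
```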
Modifier and Type | Method and Description |
---|---|
RuleMatch |
DecadeSpellingFilter.acceptRuleMatch(RuleMatch match,
Map<String,String> arguments,
int patternTokenPos,
AnalyzedTokenReadings[] patternTokens) |
protected List<RuleMatch> |
MorfologikPolishSpellerRule.getRuleMatches(String word,
int startPos,
AnalyzedSentence sentence,
List<RuleMatch> ruleMatchesSoFar,
int idx,
AnalyzedTokenReadings[] tokens) |
Modifier and Type | Method and Description |
---|---|
protected List<AnalyzedTokenReadings> |
NoDisambiguationPortuguesePartialPosTagFilter.tag(String token) |
Modifier and Type | Method and Description |
---|---|
RuleMatch |
YMDNewYearDateFilter.acceptRuleMatch(RuleMatch match,
Map<String,String> args,
int patternTokenPos,
AnalyzedTokenReadings[] patternTokens) |
RuleMatch |
ConfusionCheckFilter.acceptRuleMatch(RuleMatch match,
Map<String,String> arguments,
int patternTokenPos,
AnalyzedTokenReadings[] patternTokens) |
RuleMatch |
RomanNumeralFilter.acceptRuleMatch(RuleMatch match,
Map<String,String> arguments,
int patternTokenPos,
AnalyzedTokenReadings[] patternTokens) |
RuleMatch |
YMDDateCheckFilter.acceptRuleMatch(RuleMatch match,
Map<String,String> args,
int patternTokenPos,
AnalyzedTokenReadings[] patternTokens) |
RuleMatch |
RegularIrregularParticipleFilter.acceptRuleMatch(RuleMatch match,
Map<String,String> arguments,
int patternTokenPos,
AnalyzedTokenReadings[] patternTokens) |
boolean |
PortugueseWordRepeatRule.ignore(AnalyzedTokenReadings[] tokens,
int position) |
protected boolean |
PortugueseWordRepeatBeginningRule.isAdverb(AnalyzedTokenReadings token) |
boolean |
PortugueseFillerWordsRule.isException(AnalyzedTokenReadings[] tokens,
int num) |
protected boolean |
PortugueseBarbarismsRule.isTokenException(AnalyzedTokenReadings atr) |
protected boolean |
PortugalPortugueseReplaceRule.isTokenException(AnalyzedTokenReadings atr) |
protected boolean |
BrazilianPortugueseReplaceRule.isTokenException(AnalyzedTokenReadings atr) |
Modifier and Type | Method and Description |
---|---|
protected boolean |
RomanianWordRepeatBeginningRule.isAdverb(AnalyzedTokenReadings token) |
Modifier and Type | Method and Description |
---|---|
protected List<AnalyzedTokenReadings> |
RussianPartialPosTagFilter.tag(String token) |
protected List<AnalyzedTokenReadings> |
NoDisambiguationRussianPartialPosTagFilter.tag(String token) |
Modifier and Type | Method and Description |
---|---|
RuleMatch |
INNNumberFilter.acceptRuleMatch(RuleMatch match,
Map<String,String> args,
int patternTokenPos,
AnalyzedTokenReadings[] patternTokens) |
boolean |
RussianSimpleWordRepeatRule.ignore(AnalyzedTokenReadings[] tokens,
int position) |
protected boolean |
MorfologikRussianSpellerRule.ignoreToken(AnalyzedTokenReadings[] tokens,
int idx) |
protected boolean |
MorfologikRussianYOSpellerRule.ignoreToken(AnalyzedTokenReadings[] tokens,
int idx) |
boolean |
RussianFillerWordsRule.isException(AnalyzedTokenReadings[] tokens,
int num) |
Modifier and Type | Method and Description |
---|---|
protected boolean |
SpellingCheckRule.ignoreToken(AnalyzedTokenReadings[] tokens,
int idx)
Returns true iff the token at the given position should be ignored by the spell checker.
|
Modifier and Type | Method and Description |
---|---|
protected List<RuleMatch> |
MorfologikSpellerRule.getRuleMatches(String word,
int startPos,
AnalyzedSentence sentence,
List<RuleMatch> ruleMatchesSoFar,
int idx,
AnalyzedTokenReadings[] tokens) |
Modifier and Type | Method and Description |
---|---|
protected List<RuleMatch> |
SimpleReplaceRule.findMatches(AnalyzedTokenReadings tokenReadings,
AnalyzedSentence sentence) |
static Set<String> |
CaseGovernmentHelper.getCaseGovernments(AnalyzedTokenReadings analyzedTokenReadings,
Pattern posTag) |
static Set<String> |
CaseGovernmentHelper.getCaseGovernments(AnalyzedTokenReadings analyzedTokenReadings,
String startPosTag) |
static TokenAgreementPrepNounExceptionHelper.RuleException |
TokenAgreementPrepNounExceptionHelper.getExceptionInfl(AnalyzedTokenReadings[] tokens,
int i,
AnalyzedTokenReadings prepTokenReadings,
Set<String> posTagsToFind) |
static TokenAgreementPrepNounExceptionHelper.RuleException |
TokenAgreementPrepNounExceptionHelper.getExceptionNonInfl(AnalyzedTokenReadings[] tokens,
int i,
AnalyzedTokenReadings prepTokenReadings,
Set<String> posTagsToFind) |
static TokenAgreementPrepNounExceptionHelper.RuleException |
TokenAgreementPrepNounExceptionHelper.getExceptionStrong(AnalyzedTokenReadings[] tokens,
int i,
AnalyzedTokenReadings prepTokenReadings,
Set<String> posTagsToFind) |
static boolean |
CaseGovernmentHelper.hasCaseGovernment(AnalyzedTokenReadings analyzedTokenReadings,
Pattern startPosTag,
String rvCase) |
static boolean |
CaseGovernmentHelper.hasCaseGovernment(AnalyzedTokenReadings analyzedTokenReadings,
String rvCase) |
static boolean |
LemmaHelper.hasLemma(AnalyzedTokenReadings analyzedTokenReadings,
Collection<String> lemmas) |
static boolean |
LemmaHelper.hasLemma(AnalyzedTokenReadings analyzedTokenReadings,
Collection<String> lemmas,
Pattern posRegex) |
static boolean |
LemmaHelper.hasLemma(AnalyzedTokenReadings analyzedTokenReadings,
List<String> lemmas,
String partPos) |
static boolean |
LemmaHelper.hasLemma(AnalyzedTokenReadings analyzedTokenReadings,
Pattern pattern) |
static boolean |
LemmaHelper.hasLemma(AnalyzedTokenReadings analyzedTokenReadings,
Pattern pattern,
Pattern posTagRegex) |
static boolean |
LemmaHelper.hasLemma(AnalyzedTokenReadings analyzedTokenReadings,
String lemma) |
static boolean |
LemmaHelper.hasLemmaBase(AnalyzedTokenReadings analyzedTokenReadings,
Collection<String> lemmas,
Pattern posRegex) |
boolean |
UkrainianWordRepeatRule.ignore(AnalyzedTokenReadings[] tokens,
int position) |
protected boolean |
MorfologikUkrainianSpellerRule.ignoreToken(AnalyzedTokenReadings[] tokens,
int idx) |
static boolean |
LemmaHelper.isDash(AnalyzedTokenReadings analyzedTokenReadings) |
protected boolean |
UkrainianCommaWhitespaceRule.isException(AnalyzedTokenReadings[] tokens,
int tokenIdx) |
protected boolean |
UkrainianUppercaseSentenceStartRule.isException(AnalyzedTokenReadings[] tokens,
int tokenIdx) |
static boolean |
TokenAgreementNounVerbExceptionHelper.isException(AnalyzedTokenReadings[] tokens,
int nounPos,
int verbPos,
List<org.languagetool.rules.uk.VerbInflectionHelper.Inflection> nounInflections,
List<org.languagetool.rules.uk.VerbInflectionHelper.Inflection> verbInflections,
List<AnalyzedToken> nounTokenReadings,
List<AnalyzedToken> verbTokenReadings) |
static boolean |
TokenAgreementVerbNounExceptionHelper.isException(AnalyzedTokenReadings[] tokens,
int verbPos,
int nounAdjPos,
org.languagetool.rules.uk.TokenAgreementVerbNounRule.State state,
List<org.languagetool.rules.uk.VerbInflectionHelper.Inflection> verbInflections,
List<org.languagetool.rules.uk.VerbInflectionHelper.Inflection> nounAdjInflections,
List<AnalyzedToken> verbTokenReadings,
List<AnalyzedToken> nounTokenReadings) |
static boolean |
LemmaHelper.isInitial(AnalyzedTokenReadings analyzedTokenReadings) |
protected boolean |
SimpleReplaceRule.isTagged(AnalyzedTokenReadings tokenReadings) |
protected boolean |
SimpleReplaceSoftRule.isTokenException(AnalyzedTokenReadings atr) |
Modifier and Type | Method and Description |
---|---|
protected List<String> |
RemoteSynthesizer.synthesize(String languageCode,
AnalyzedTokenReadings atrs,
boolean postagRegexp,
String postagSelect,
String postagReplace,
String lemmaReplace) |
Modifier and Type | Method and Description |
---|---|
AnalyzedTokenReadings |
BaseTagger.createNullToken(String token,
int startPos) |
AnalyzedTokenReadings |
Tagger.createNullToken(String token,
int startPos)
Create the AnalyzedToken used for whitespace and other non-words.
|
Modifier and Type | Method and Description |
---|---|
List<AnalyzedTokenReadings> |
BaseTagger.tag(List<String> sentenceTokens) |
List<AnalyzedTokenReadings> |
Tagger.tag(List<String> sentenceTokens)
Returns a list of
AnalyzedTokens that assign each term in the
sentence some kind of part-of-speech information (not necessarily just one tag). |
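A hedged usage sketch for Tagger.tag(...) (obtaining the tagger from a Language instance is an assumption; the tag(List<String>) signature itself is the one listed above):

```java
import java.io.IOException;
import java.util.Arrays;
import java.util.List;
import org.languagetool.AnalyzedTokenReadings;
import org.languagetool.language.AmericanEnglish;
import org.languagetool.tagging.Tagger;

public class TaggerDemo {
  public static void main(String[] args) throws IOException {
    Tagger tagger = new AmericanEnglish().getTagger();
    // Each input term gets one AnalyzedTokenReadings, possibly with several readings.
    List<AnalyzedTokenReadings> tagged = tagger.tag(Arrays.asList("The", "cats", "sleep"));
    for (AnalyzedTokenReadings atr : tagged) {
      System.out.println(atr.getToken() + " -> " + atr.getReadings());
    }
  }
}
```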
Modifier and Type | Method and Description |
---|---|
AnalyzedTokenReadings |
ArabicTagger.tag(String word) |
Modifier and Type | Method and Description |
---|---|
List<AnalyzedTokenReadings> |
ArabicTagger.tag(List<String> sentenceTokens) |
Modifier and Type | Method and Description |
---|---|
List<String> |
ArabicTagger.getLemmas(AnalyzedTokenReadings patternTokens,
String type) |
Modifier and Type | Method and Description |
---|---|
List<AnalyzedTokenReadings> |
BretonTagger.tag(List<String> sentenceTokens) |
Modifier and Type | Method and Description |
---|---|
List<AnalyzedTokenReadings> |
CatalanTagger.tag(List<String> sentenceTokens) |
Modifier and Type | Method and Description |
---|---|
AnalyzedTokenReadings |
GermanTagger.lookup(String word)
Returns only the first reading of the given word, or
null. |
Modifier and Type | Method and Description |
---|---|
List<AnalyzedTokenReadings> |
GermanTagger.tag(List<String> sentenceTokens) |
List<AnalyzedTokenReadings> |
SwissGermanTagger.tag(List<String> sentenceTokens,
boolean ignoreCase) |
List<AnalyzedTokenReadings> |
GermanTagger.tag(List<String> sentenceTokens,
boolean ignoreCase) |
Modifier and Type | Method and Description |
---|---|
protected AnalyzedTokenReadings |
MultiWordChunker2.prepareNewReading(String tokens,
String tok,
AnalyzedTokenReadings token,
String tag) |
Modifier and Type | Method and Description |
---|---|
protected boolean |
MultiWordChunker2.matches(String matchText,
AnalyzedTokenReadings inputTokens) |
protected AnalyzedTokenReadings |
MultiWordChunker2.prepareNewReading(String tokens,
String tok,
AnalyzedTokenReadings token,
String tag) |
Modifier and Type | Method and Description |
---|---|
List<AnalyzedTokenReadings> |
EnglishTagger.tag(List<String> sentenceTokens) |
Modifier and Type | Method and Description |
---|---|
AnalyzedTokenReadings |
EsperantoTagger.createNullToken(String token,
int startPos) |
Modifier and Type | Method and Description |
---|---|
List<AnalyzedTokenReadings> |
EsperantoTagger.tag(List<String> sentenceTokens) |
Modifier and Type | Method and Description |
---|---|
List<AnalyzedTokenReadings> |
SpanishTagger.tag(List<String> sentenceTokens) |
Modifier and Type | Method and Description |
---|---|
List<AnalyzedTokenReadings> |
FrenchTagger.tag(List<String> sentenceTokens) |
Modifier and Type | Method and Description |
---|---|
List<AnalyzedTokenReadings> |
IrishTagger.tag(List<String> sentenceTokens) |
Modifier and Type | Method and Description |
---|---|
List<AnalyzedTokenReadings> |
GalicianTagger.tag(List<String> sentenceTokens) |
Modifier and Type | Method and Description |
---|---|
AnalyzedTokenReadings |
JapaneseTagger.createNullToken(String token,
int startPos) |
Modifier and Type | Method and Description |
---|---|
List<AnalyzedTokenReadings> |
JapaneseTagger.tag(List<String> sentenceTokens) |
Modifier and Type | Method and Description |
---|---|
List<AnalyzedTokenReadings> |
DutchTagger.tag(List<String> sentenceTokens) |
Modifier and Type | Method and Description |
---|---|
List<AnalyzedTokenReadings> |
PolishTagger.tag(List<String> sentenceTokens) |
Modifier and Type | Method and Description |
---|---|
List<AnalyzedTokenReadings> |
PortugueseTagger.tag(List<String> sentenceTokens) |
Modifier and Type | Method and Description |
---|---|
List<AnalyzedTokenReadings> |
RussianTagger.tag(List<String> sentenceTokens) |
Modifier and Type | Method and Description |
---|---|
static String |
PosTagHelper.getGenders(AnalyzedTokenReadings tokenReadings,
Pattern posTagRegex) |
static String |
PosTagHelper.getGenders(AnalyzedTokenReadings tokenReadings,
String posTagRegex) |
static boolean |
PosTagHelper.hasMaleUA(AnalyzedTokenReadings tokenReadings) |
static boolean |
PosTagHelper.hasPosTag(AnalyzedTokenReadings analyzedTokenReadings,
Pattern posTagRegex) |
static boolean |
PosTagHelper.hasPosTag(AnalyzedTokenReadings analyzedTokenReadings,
String posTagRegex) |
static boolean |
PosTagHelper.hasPosTagAndToken(AnalyzedTokenReadings tokens,
Pattern postag,
Pattern token) |
static boolean |
PosTagHelper.hasPosTagPart(AnalyzedTokenReadings analyzedTokenReadings,
String posTagPart) |
static boolean |
PosTagHelper.hasPosTagPartAll(AnalyzedTokenReadings analyzedTokenReadings,
String posTagPart) |
static boolean |
PosTagHelper.hasPosTagStart(AnalyzedTokenReadings analyzedTokenReadings,
String posTagPart) |
static boolean |
PosTagHelper.isUnknownWord(AnalyzedTokenReadings analyzedTokenReadings) |
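The PosTagHelper statics above are simple read-only queries over a token's readings. A hedged sketch of how they might be used (the package location, the Ukrainian sample word, and the "noun.*" regex are assumptions):

```java
import java.io.IOException;
import java.util.Arrays;
import java.util.List;
import org.languagetool.AnalyzedTokenReadings;
import org.languagetool.language.Ukrainian;
import org.languagetool.tagging.uk.PosTagHelper;  // package location assumed

public class PosTagHelperDemo {
  public static void main(String[] args) throws IOException {
    // Tag a single Ukrainian word and inspect it with the static helpers listed above.
    List<AnalyzedTokenReadings> tagged = new Ukrainian().getTagger().tag(Arrays.asList("книга"));
    for (AnalyzedTokenReadings atr : tagged) {
      System.out.println(atr.getToken()
          + " noun: " + PosTagHelper.hasPosTag(atr, "noun.*")
          + " genders: " + PosTagHelper.getGenders(atr, "noun.*"));
    }
  }
}
```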
Modifier and Type | Method and Description |
---|---|
AnalyzedTokenReadings |
DemoTagger.createNullToken(String token,
int startPos) |
Modifier and Type | Method and Description |
---|---|
List<AnalyzedTokenReadings> |
DemoTagger.tag(List<String> sentenceTokens) |
Modifier and Type | Method and Description |
---|---|
AnalyzedTokenReadings |
ChineseTagger.createNullToken(String token,
int startPos) |
Modifier and Type | Method and Description |
---|---|
List<AnalyzedTokenReadings> |
ChineseTagger.tag(List<String> sentenceTokens) |