NlpSemanticParsingAnnotationEvalData
NLPInfrastructureGoogleApi.ContentWarehouse.V1.Model.NlpSemanticParsingAnnotationEvalData
SEO Impact: 2 out of 10 (Low)
Annotators whose semantics are represented by a protocol message should add to that message a field or extension of this type and set it using Annotator::PopulateAnnotationEvalData to enable span-based evaluation metrics in training. Evaluation is done on token spans; the byte span aligns with the token span and is used when saving examples. Background: in some settings, the examples used to induce or train a grammar do not specify the complete semantics of an annotation. For example, some examples that come from Ewok specify only the span associated with each annotation. This message allows evaluation metrics to test the span by embedding it in the semantics.
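The paragraph above describes comparing annotations by their spans rather than by full semantics. The sketch below is a hypothetical illustration of that idea: it treats each annotation as a primary (startByte, numBytes) span plus any additionalSpans, and scores predicted annotations against gold ones by exact span match. All function names and the dict-based representation are assumptions for illustration, not part of the real API.

```python
# Illustrative span-based evaluation over (startByte, numBytes) spans.
# The data layout and helper names here are assumptions, not the real API.

def spans(eval_data):
    """Yield (start_byte, num_bytes) for the primary span and any additional spans."""
    yield (eval_data["startByte"], eval_data["numBytes"])
    for extra in eval_data.get("additionalSpans", []):
        yield (extra["startByte"], extra["numBytes"])

def span_f1(gold, predicted):
    """Score annotations purely by byte span, ignoring their full semantics."""
    gold_spans = {s for g in gold for s in spans(g)}
    pred_spans = {s for p in predicted for s in spans(p)}
    if not gold_spans or not pred_spans:
        return 0.0
    tp = len(gold_spans & pred_spans)  # exact span matches
    if tp == 0:
        return 0.0
    precision = tp / len(pred_spans)
    recall = tp / len(gold_spans)
    return 2 * precision * recall / (precision + recall)

gold = [{"startByte": 0, "numBytes": 5, "additionalSpans": []}]
pred = [{"startByte": 0, "numBytes": 5, "additionalSpans": []},
        {"startByte": 10, "numBytes": 3, "additionalSpans": []}]
print(span_f1(gold, pred))  # one of two predicted spans matches the single gold span
```

Because the spans are embedded in the semantics message itself, such a metric can run even when training examples carry no other semantic payload.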
SEO Analysis
AI Generated: Backend infrastructure with indirect SEO impact. This model (NlpSemanticParsingAnnotationEvalData) contains 5 attributes that define its data structure. Its key function is to record the token and byte spans of an annotation, including any additional spans after the first.
Actionable Insights for SEOs
- Understanding this model helps SEOs grasp Google's internal data architecture
Attributes (5)
- additionalSpans (type: list(GoogleApi.ContentWarehouse.V1.Model.NlpSemanticParsingAnnotationEvalData.t), default: nil) - Additional spans after the first. Empty in all additional_spans.
- numBytes (type: integer(), default: nil)
- numTokens (type: integer(), default: nil)
- startByte (type: integer(), default: nil) - Byte position within the utterance. Safe to use across different components of the NLU stack as long as said components have access to the same query.
- startToken (type: integer(), default: nil) - Token position. This is cleared when normalizing examples for storage because tokenization changes over time. DO NOT use the token fields across components that use different tokenizations.
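The field descriptions above draw a sharp line between byte spans (stable for any component that sees the same query) and token spans (invalid once tokenization changes, hence cleared when saving). A minimal sketch of that normalization rule, assuming a hypothetical mirror of the message as a Python dataclass, could look like:

```python
# Hypothetical mirror of the eval-data fields; not the real API. Token
# positions are cleared when an example is normalized for storage, because
# tokenization changes over time, while byte positions remain valid.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class AnnotationEvalData:
    start_byte: Optional[int] = None
    num_bytes: Optional[int] = None
    start_token: Optional[int] = None
    num_tokens: Optional[int] = None
    additional_spans: List["AnnotationEvalData"] = field(default_factory=list)

    def normalize_for_storage(self) -> None:
        """Drop token-based fields; keep byte spans for the saved example."""
        self.start_token = None
        self.num_tokens = None
        for span in self.additional_spans:
            span.normalize_for_storage()

data = AnnotationEvalData(start_byte=4, num_bytes=7, start_token=1, num_tokens=2)
data.normalize_for_storage()
print(data.start_token, data.start_byte)  # token position gone, byte position kept
```

The recursion over additional_spans reflects that each extra span is itself a full NlpSemanticParsingAnnotationEvalData value, per the attribute list above.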