NYTProf Performance Profile (line view)
For bin/hailo
  Run on Thu Oct 21 22:50:37 2010
Reported on Thu Oct 21 22:52:15 2010

Filename: /mnt/stuff/src/my-cpan/hailo/lib/Hailo/Tokenizer/Words.pm
Statements: Executed 2143390 statements in 11.3s
Subroutines
Calls    P   F   Exclusive Time   Inclusive Time   Subroutine
 25000   1   1   8.16s            11.4s            Hailo::Tokenizer::Words::make_tokens
752855   3   1   1.43s            1.43s            Hailo::Tokenizer::Words::CORE:subst (opcode)
557897   3   1   1.29s            1.33s            Hailo::Tokenizer::Words::CORE:match (opcode)
248693  14   1   360ms            360ms            Hailo::Tokenizer::Words::CORE:regcomp (opcode)
     1   1   1   2.06ms           26.2ms           Hailo::Tokenizer::Words::BEGIN@7
     1   1   1   705µs            1.17ms           Hailo::Tokenizer::Words::BEGIN@8
     1   1   1   63µs             141µs            Hailo::Tokenizer::Words::BEGIN@3
    22  22   1   42µs             42µs             Hailo::Tokenizer::Words::CORE:qr (opcode)
     1   1   1   16µs             1.37ms           Hailo::Tokenizer::Words::BEGIN@9
     1   1   1   14µs             20µs             Hailo::Tokenizer::Words::BEGIN@4
     1   1   1   13µs             357µs            Hailo::Tokenizer::Words::BEGIN@6
     1   1   1   12µs             78µs             Hailo::Tokenizer::Words::BEGIN@3.18
     1   1   1   11µs             662µs            Hailo::Tokenizer::Words::BEGIN@5
     1   1   1   2µs              2µs              Hailo::Tokenizer::Words::spacing (xsub)
     0   0   0   0s               0s               Hailo::Tokenizer::Words::make_output
A call graph for these subroutines is available as a Graphviz dot language file.
Line  [stmts, time on line; calls, time in subs]  Code
1  package Hailo::Tokenizer::Words;
2
3  [4 stmts, 78µs; 3 calls, 284µs]
# spent 78µs (12+65) within Hailo::Tokenizer::Words::BEGIN@3.18 which was called: # once (12µs+65µs) by Hailo::Tokenizer::Words::BEGIN@3 at line 3 # spent 141µs (63+78) within Hailo::Tokenizer::Words::BEGIN@3 which was called: # once (63µs+78µs) by Hailo::_new_class at line 3
use 5.010;
# spent 141µs making 1 call to Hailo::Tokenizer::Words::BEGIN@3 # spent 78µs making 1 call to Hailo::Tokenizer::Words::BEGIN@3.18 # spent 66µs making 1 call to feature::import
4  [2 stmts, 27µs; 2 calls, 26µs]
# spent 20µs (14+6) within Hailo::Tokenizer::Words::BEGIN@4 which was called: # once (14µs+6µs) by Hailo::_new_class at line 4
use utf8;
# spent 20µs making 1 call to Hailo::Tokenizer::Words::BEGIN@4 # spent 6µs making 1 call to utf8::import
5  [2 stmts, 31µs; 2 calls, 1.31ms]
# spent 662µs (11+651) within Hailo::Tokenizer::Words::BEGIN@5 which was called: # once (11µs+651µs) by Hailo::_new_class at line 5
use Any::Moose;
# spent 662µs making 1 call to Hailo::Tokenizer::Words::BEGIN@5 # spent 651µs making 1 call to Any::Moose::import
6  [2 stmts, 30µs; 2 calls, 701µs]
# spent 357µs (13+344) within Hailo::Tokenizer::Words::BEGIN@6 which was called: # once (13µs+344µs) by Hailo::_new_class at line 6
use Any::Moose 'X::StrictConstructor';
# spent 357µs making 1 call to Hailo::Tokenizer::Words::BEGIN@6 # spent 344µs making 1 call to Any::Moose::import
7  [2 stmts, 163µs; 2 calls, 49.8ms]
# spent 26.2ms (2.06+24.1) within Hailo::Tokenizer::Words::BEGIN@7 which was called: # once (2.06ms+24.1ms) by Hailo::_new_class at line 7
use Regexp::Common qw/ URI /;
# spent 26.2ms making 1 call to Hailo::Tokenizer::Words::BEGIN@7 # spent 23.6ms making 1 call to Regexp::Common::import
8  [2 stmts, 163µs; 2 calls, 1.21ms]
# spent 1.17ms (705µs+460µs) within Hailo::Tokenizer::Words::BEGIN@8 which was called: # once (705µs+460µs) by Hailo::_new_class at line 8
use Text::Unidecode;
# spent 1.17ms making 1 call to Hailo::Tokenizer::Words::BEGIN@8 # spent 48µs making 1 call to Exporter::import
9  [2 stmts, 1.06ms; 2 calls, 2.71ms]
# spent 1.37ms (16µs+1.35) within Hailo::Tokenizer::Words::BEGIN@9 which was called: # once (16µs+1.35ms) by Hailo::_new_class at line 9
use namespace::clean -except => 'meta';
# spent 1.37ms making 1 call to Hailo::Tokenizer::Words::BEGIN@9 # spent 1.35ms making 1 call to namespace::clean::import
10
11  [1 stmt, 5µs; 1 call, 10.1ms]  with qw(Hailo::Role::Arguments
# spent 10.1ms making 1 call to Mouse::with
12 Hailo::Role::Tokenizer);
13
14  # tokenization
15  [1 stmt, 12µs; 1 call, 5µs]  my $DECIMAL = qr/[.,]/;
# spent 5µs making 1 call to Hailo::Tokenizer::Words::CORE:qr
16  [1 stmt, 38µs; 2 calls, 22µs]  my $NUMBER = qr/$DECIMAL?\d+(?:$DECIMAL\d+)*/;
# spent 19µs making 1 call to Hailo::Tokenizer::Words::CORE:regcomp # spent 2µs making 1 call to Hailo::Tokenizer::Words::CORE:qr
17  [1 stmt, 6µs; 1 call, 2µs]  my $APOSTROPHE = qr/['’´]/;
# spent 2µs making 1 call to Hailo::Tokenizer::Words::CORE:qr
18  [1 stmt, 27µs; 2 calls, 19µs]  my $APOST_WORD = qr/[[:alpha:]]+(?:$APOSTROPHE(?:[[:alpha:]]+))+/;
# spent 17µs making 1 call to Hailo::Tokenizer::Words::CORE:regcomp # spent 2µs making 1 call to Hailo::Tokenizer::Words::CORE:qr
19  [1 stmt, 6µs; 1 call, 2µs]  my $TWAT_NAME = qr/ \@ [A-Za-z0-9_]+ /x;
# spent 2µs making 1 call to Hailo::Tokenizer::Words::CORE:qr
20  [1 stmt, 6µs; 1 call, 2µs]  my $NON_WORD = qr/[^_\d[:alpha:]]+/;
# spent 2µs making 1 call to Hailo::Tokenizer::Words::CORE:qr
21  [1 stmt, 6µs; 1 call, 2µs]  my $PLAIN_WORD = qr/[_[:alpha:]]+/;
# spent 2µs making 1 call to Hailo::Tokenizer::Words::CORE:qr
22  [1 stmt, 29µs; 2 calls, 22µs]  my $ALPHA_WORD = qr/$APOST_WORD|$PLAIN_WORD/;
# spent 20µs making 1 call to Hailo::Tokenizer::Words::CORE:regcomp # spent 2µs making 1 call to Hailo::Tokenizer::Words::CORE:qr
23  [1 stmt, 32µs; 2 calls, 25µs]  my $WORD_TYPES = qr/$NUMBER|$ALPHA_WORD/;
# spent 22µs making 1 call to Hailo::Tokenizer::Words::CORE:regcomp # spent 2µs making 1 call to Hailo::Tokenizer::Words::CORE:qr
24  [1 stmt, 68µs; 2 calls, 61µs]  my $WORD = qr/$WORD_TYPES(?:-$WORD_TYPES)*/;
# spent 59µs making 1 call to Hailo::Tokenizer::Words::CORE:regcomp # spent 2µs making 1 call to Hailo::Tokenizer::Words::CORE:qr
25  [1 stmt, 6µs; 1 call, 2µs]  my $MIXED_CASE = qr/ \p{Lower}+ \p{Upper} /x;
# spent 2µs making 1 call to Hailo::Tokenizer::Words::CORE:qr
26  [1 stmt, 5µs; 1 call, 2µs]  my $UPPER_NONW = qr/^ \p{Upper}{2,} \W+ \p{Lower}+ $/x;
# spent 2µs making 1 call to Hailo::Tokenizer::Words::CORE:qr
27
28  # capitalization
29  # The rest of the regexes are pretty hairy. The goal here is to catch the
30  # most common cases where a word should be capitalized. We try hard to
31  # guard against capitalizing things which don't look like proper words.
32  # Examples include URLs and code snippets.
33  [1 stmt, 5µs; 1 call, 2µs]  my $OPEN_QUOTE = qr/['"‘“„«»「『‹‚]/;
# spent 2µs making 1 call to Hailo::Tokenizer::Words::CORE:qr
34  [1 stmt, 5µs; 1 call, 2µs]  my $CLOSE_QUOTE = qr/['"’“”«»」』›‘]/;
# spent 2µs making 1 call to Hailo::Tokenizer::Words::CORE:qr
35  [1 stmt, 5µs; 1 call, 2µs]  my $TERMINATOR = qr/(?:[?!‽]+|(?<!\.)\.)/;
# spent 2µs making 1 call to Hailo::Tokenizer::Words::CORE:qr
36  [1 stmt, 6µs; 1 call, 2µs]  my $ADDRESS = qr/:/;
# spent 2µs making 1 call to Hailo::Tokenizer::Words::CORE:qr
37  [1 stmt, 5µs; 1 call, 2µs]  my $PUNCTUATION = qr/[?!‽,;.:]/;
# spent 2µs making 1 call to Hailo::Tokenizer::Words::CORE:qr
38  [1 stmt, 36µs; 2 calls, 28µs]  my $BOUNDARY = qr/$CLOSE_QUOTE?(?:\s*$TERMINATOR|$ADDRESS)\s+$OPEN_QUOTE?\s*/;
# spent 27µs making 1 call to Hailo::Tokenizer::Words::CORE:regcomp # spent 2µs making 1 call to Hailo::Tokenizer::Words::CORE:qr
39  [1 stmt, 43µs; 2 calls, 36µs]  my $LOOSE_WORD = qr/(?:$NUMBER|$APOST_WORD|\w+)(?:-(?:$NUMBER|$APOST_WORD|\w+))*/;
# spent 34µs making 1 call to Hailo::Tokenizer::Words::CORE:regcomp # spent 2µs making 1 call to Hailo::Tokenizer::Words::CORE:qr
40  [1 stmt, 91µs; 2 calls, 82µs]  my $SPLIT_WORD = qr{$LOOSE_WORD(?:/$LOOSE_WORD)?(?=$PUNCTUATION(?: |$)|$CLOSE_QUOTE|$TERMINATOR| |$)};
# spent 80µs making 1 call to Hailo::Tokenizer::Words::CORE:regcomp # spent 2µs making 1 call to Hailo::Tokenizer::Words::CORE:qr
41
42  # we want to capitalize words that come after "On example.com?"
43  # or "You mean 3.2?", but not "Yes, e.g."
44  [1 stmt, 57µs; 2 calls, 49µs]  my $DOTTED_STRICT = qr/$LOOSE_WORD(?:$DECIMAL(?:\d+|\w{2,}))?/;
# spent 47µs making 1 call to Hailo::Tokenizer::Words::CORE:regcomp # spent 2µs making 1 call to Hailo::Tokenizer::Words::CORE:qr
45  [1 stmt, 79µs; 2 calls, 72µs]  my $WORD_STRICT = qr/$DOTTED_STRICT(?:$APOSTROPHE$DOTTED_STRICT)*/;
# spent 69µs making 1 call to Hailo::Tokenizer::Words::CORE:regcomp # spent 2µs making 1 call to Hailo::Tokenizer::Words::CORE:qr
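The word patterns above do most of the tokenizer's work, and their behaviour on decimals, apostrophes and hyphens is easiest to see on concrete strings. A minimal self-contained sketch (the qr// definitions are copied from lines 15-24 above; the sample strings and the expected output are illustrative and are not part of the profiled run):

    use strict;
    use warnings;
    use utf8;

    # copied from lines 15-24 above
    my $DECIMAL    = qr/[.,]/;
    my $NUMBER     = qr/$DECIMAL?\d+(?:$DECIMAL\d+)*/;
    my $APOSTROPHE = qr/['’´]/;
    my $APOST_WORD = qr/[[:alpha:]]+(?:$APOSTROPHE(?:[[:alpha:]]+))+/;
    my $PLAIN_WORD = qr/[_[:alpha:]]+/;
    my $ALPHA_WORD = qr/$APOST_WORD|$PLAIN_WORD/;
    my $WORD_TYPES = qr/$NUMBER|$ALPHA_WORD/;
    my $WORD       = qr/$WORD_TYPES(?:-$WORD_TYPES)*/;

    for my $sample ('3.2', "don't", 'state-of-the-art', '4.1GB') {
        my @hits = $sample =~ /($WORD)/g;
        print "$sample -> [@hits]\n";
    }

    # Expected, roughly:
    #   3.2              -> [3.2]               a decimal number is a single token
    #   don't            -> [don't]             apostrophe words stay whole
    #   state-of-the-art -> [state-of-the-art]  hyphenated words stay whole
    #   4.1GB            -> [4.1 GB]            number and unit match separately, which is
    #                                           why make_tokens() below collects consecutive
    #                                           $WORD matches and assigns prefix/postfix spacing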
46
47  # input -> tokens
48
# spent 11.4s (8.16+3.19) within Hailo::Tokenizer::Words::make_tokens which was called 25000 times, avg 454µs/call: # 25000 times (8.16s+3.19s) by Hailo::_learn_one at line 288 of lib/Hailo.pm, avg 454µs/call
sub make_tokens {
49  [25000 stmts, 35.1ms]  my ($self, $line) = @_;
50
51  [25000 stmts, 26.7ms]  my @tokens;
52  [25000 stmts, 108ms; 1 call, 507µs]  my @chunks = split /\s+/, $line;
# spent 507µs making 1 call to utf8::SWASHNEW
53
54 # process all whitespace-delimited chunks
55  [25000 stmts, 62.0ms]  for my $chunk (@chunks) {
56  [131949 stmts, 108ms]  my $got_word;
57
58  [131949 stmts, 463ms]  while (length $chunk) {
59 # We convert it to ASCII and then look for a URI because $RE{URI}
60 # from Regexp::Common doesn't support non-ASCII domain names
61  [154636 stmts, 174ms]  my $ascii = $chunk;
62  [154636 stmts, 886ms; 154637 calls, 325ms]  $ascii =~ s/[^[:ascii:]]/a/g;
# spent 324ms making 154636 calls to Hailo::Tokenizer::Words::CORE:subst, avg 2µs/call # spent 373µs making 1 call to utf8::SWASHNEW
63
64 # URIs
65  [154636 stmts, 3.04s; 640168 calls, 1.18s]  if ($ascii =~ / ^ $RE{URI} /xo) {
# spent 772ms making 309219 calls to Hailo::Tokenizer::Words::CORE:match, avg 2µs/call # spent 397ms making 330923 calls to Hailo::Tokenizer::Words::CORE:subst, avg 1µs/call # spent 12.6ms making 19 calls to utf8::SWASHNEW, avg 665µs/call # spent 1.20ms making 4 calls to Hailo::Tokenizer::Words::CORE:regcomp, avg 300µs/call # spent 125µs making 1 call to Regexp::Common::Entry::__ANON__[Regexp/Common.pm:268] # spent 93µs making 1 call to Regexp::Common::_decache # spent 33µs making 1 call to Regexp::Common::FETCH
66 my $uri_end = $+[0];
67 my $uri = substr $chunk, 0, $uri_end;
68 $chunk =~ s/^\Q$uri//;
69
70 push @tokens, [$self->{_spacing_normal}, $uri];
71 $got_word = 1;
72 }
73 # ssh:// (and foo+ssh://) URIs
74 elsif ($chunk =~ s{ ^ (?<uri> (?:\w+\+) ssh:// \S+ ) }{}xo) {
75 push @tokens, [$self->{_spacing_normal}, $+{uri}];
76 $got_word = 1;
77 }
78 # Twitter names
79 elsif ($chunk =~ s/ ^ (?<twat> $TWAT_NAME ) //xo) {
80 # Names on Twitter/Identi.ca can only match
81 # @[A-Za-z0-9_]+. I tested this on ~800k Twatterhose
82 # names.
83  [53 stmts, 560µs; 53 calls, 115µs]  push @tokens, [$self->{_spacing_normal}, $+{twat}];
# spent 115µs making 53 calls to Tie::Hash::NamedCapture::FETCH, avg 2µs/call
84  [53 stmts, 64µs]  $got_word = 1;
85 }
86 # normal words
87 elsif ($chunk =~ / ^ $WORD /xo) {
88 # there's probably a simpler way to accomplish this
89  [132932 stmts, 139ms]  my @words;
90  [132932 stmts, 134ms]  while (1) {
91  [267296 stmts, 1.63s; 267309 calls, 710ms]  last if $chunk !~ s/^($WORD)//o;
# spent 708ms making 267296 calls to Hailo::Tokenizer::Words::CORE:subst, avg 3µs/call # spent 1.85ms making 12 calls to utf8::SWASHNEW, avg 154µs/call # spent 63µs making 1 call to Hailo::Tokenizer::Words::CORE:regcomp
92  [134364 stmts, 397ms]  push @words, $1;
93 }
94
95  [132932 stmts, 324ms]  for my $word (@words) {
96 # Maybe preserve the casing of this word
97  [134364 stmts, 2.86s; 493049 calls, 955ms]  $word = lc $word
# spent 547ms making 246522 calls to Hailo::Tokenizer::Words::CORE:match, avg 2µs/call # spent 355ms making 246522 calls to Hailo::Tokenizer::Words::CORE:regcomp, avg 1µs/call # spent 53.2ms making 5 calls to utf8::SWASHNEW, avg 10.6ms/call
98 if $word ne uc $word
99 # Mixed-case words like "WoW"
100 and $word !~ $MIXED_CASE
101 # Words that are upper case followed by a non-word character.
102 # {2,} so it doesn't match I'm
103 and $word !~ $UPPER_NONW;
104 }
105
106  [132932 stmts, 307ms]  if (@words == 1) {
107 push @tokens, [$self->{_spacing_normal}, $words[0]];
108 }
109 elsif (@words == 2) {
110 # When there are two words joined together, we need to
111 # decide if it's normal+postfix (e.g. "4.1GB") or
112 # prefix+normal (e.g. "v2.3")
113
114  [888 stmts, 22.2ms; 4317 calls, 11.4ms]  if ($words[0] =~ /$NUMBER/ && $words[1] =~ /$ALPHA_WORD/) {
# spent 7.08ms making 2156 calls to Hailo::Tokenizer::Words::CORE:match, avg 3µs/call # spent 3.61ms making 2156 calls to Hailo::Tokenizer::Words::CORE:regcomp, avg 2µs/call # spent 714µs making 5 calls to utf8::SWASHNEW, avg 143µs/call
115  [509 stmts, 1.31ms]  push @tokens, [$self->{_spacing_normal}, $words[0]];
116  [509 stmts, 1.01ms]  push @tokens, [$self->{_spacing_postfix}, $words[1]];
117 }
118 elsif ($words[0] =~ /$ALPHA_WORD/ && $words[1] =~ /$NUMBER/) {
119  [379 stmts, 985µs]  push @tokens, [$self->{_spacing_prefix}, $words[0]];
120  [379 stmts, 718µs]  push @tokens, [$self->{_spacing_normal}, $words[1]];
121 }
122 }
123 else {
124 # When 3 or more words are together, (e.g. "800x600"),
125 # we treat them as two normal tokens surrounding one or
126 # more infix tokens
127  [162 stmts, 915µs]  push @tokens, [$self->{_spacing_normal}, $_] for $words[0];
128  [162 stmts, 1.32ms]  push @tokens, [$self->{_spacing_infix}, $_] for @words[1..$#words-1];
129  [162 stmts, 718µs]  push @tokens, [$self->{_spacing_normal}, $_] for $words[-1];
130 }
131
132  [132932 stmts, 169ms]  $got_word = 1;
133 }
134 # everything else
135 elsif ($chunk =~ s/ ^ (?<non_word> $NON_WORD ) //xo) {
136  [21651 stmts, 200ms; 21651 calls, 46.3ms]  my $non_word = $+{non_word};
# spent 46.3ms making 21651 calls to Tie::Hash::NamedCapture::FETCH, avg 2µs/call
137  [21651 stmts, 28.5ms]  my $spacing = $self->{_spacing_normal};
138
139 # was the previous token a word?
140  [21651 stmts, 33.4ms]  if ($got_word) {
141 $spacing = length $chunk
142 ? $self->{_spacing_infix}
143 : $self->{_spacing_postfix};
144 }
145 # do we still have more tokens in this chunk?
146 elsif (length $chunk) {
147 $spacing = $self->{_spacing_prefix};
148 }
149
150  [21651 stmts, 40.5ms]  push @tokens, [$spacing, $non_word];
151 }
152 }
153 }
154  [25000 stmts, 115ms]  return \@tokens;
155  }
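A rough usage sketch of the sub that dominates this profile (not part of the profiled source; the constructor arguments and the numeric spacing constants come from Hailo::Role::Tokenizer and may differ between Hailo versions, so the annotated output is approximate):

    use strict;
    use warnings;
    use Hailo::Tokenizer::Words;

    my $toker  = Hailo::Tokenizer::Words->new;
    my $tokens = $toker->make_tokens('You mean v2.3?');

    # Each token is a [spacing, text] pair. For this input the pairs should come out
    # roughly as: you (normal), mean (normal), v (prefix), 2.3 (normal), ? (postfix).
    print join(' | ', map { $_->[1] } @$tokens), "\n";   # you | mean | v | 2.3 | ?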
156
157  # tokens -> output
158  sub make_output {
159 my ($self, $tokens) = @_;
160 my $reply = '';
161
162 for my $pos (0 .. $#{ $tokens }) {
163 my ($spacing, $text) = @{ $tokens->[$pos] };
164 $reply .= $text;
165
166 # append whitespace if this is not a prefix token or infix token,
167 # and this is not the last token, and the next token is not
168 # a postfix/infix token
169 if ($pos != $#{ $tokens }
170 && $spacing != $self->{_spacing_prefix}
171 && $spacing != $self->{_spacing_infix}
172 && !($pos < $#{ $tokens }
173 && ($tokens->[$pos+1][0] == $self->{_spacing_postfix}
174 || $tokens->[$pos+1][0] == $self->{_spacing_infix})
175 )
176 ) {
177 $reply .= ' ';
178 }
179 }
180
181 # capitalize the first word
182 $reply =~ s/^\s*$OPEN_QUOTE?\s*\K($SPLIT_WORD)(?=(?:$TERMINATOR+|$ADDRESS|$PUNCTUATION+)?\b)/\u$1/o;
183
184 # capitalize the second word
185 $reply =~ s/^\s*$OPEN_QUOTE?\s*$SPLIT_WORD(?:(?:\s*$TERMINATOR|$ADDRESS)\s+)\K($SPLIT_WORD)/\u$1/o;
186
187 # capitalize all other words after word boundaries
188 # we do it in two passes because we need to match two words at a time
189 $reply =~ s/ $OPEN_QUOTE?\s*$WORD_STRICT$BOUNDARY\K($SPLIT_WORD)/\x1B\u$1\x1B/go;
190 $reply =~ s/\x1B$WORD_STRICT\x1B$BOUNDARY\K($SPLIT_WORD)/\u$1/go;
191 $reply =~ s/\x1B//go;
192
193 # end paragraphs with a period when it makes sense
194 $reply =~ s/(?: |^)$OPEN_QUOTE?$SPLIT_WORD$CLOSE_QUOTE?\K$/./o;
195
196 # capitalize I'm, I've...
197 $reply =~ s{(?: |$OPEN_QUOTE)\Ki(?=$APOSTROPHE(?:[[:alpha:]]))}{I}go;
198
199 return $reply;
200  }
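Continuing the same assumptions, make_output() reverses the process: it concatenates the token texts, adds a space after a token unless that token is a prefix or infix token or the next token is a postfix or infix token, and then runs the capitalization passes. A minimal sketch, with the expected result hedged for the same reasons as above:

    use strict;
    use warnings;
    use Hailo::Tokenizer::Words;

    my $toker = Hailo::Tokenizer::Words->new;
    my $reply = $toker->make_output($toker->make_tokens('You mean v2.3?'));
    print "$reply\n";   # expected to be close to: You mean v2.3?
    # - "v" carries prefix spacing, so no space is inserted before "2.3"
    # - "?" carries postfix spacing, so no space is inserted before it
    # - the first word is re-capitalized by the capitalize-the-first-word
    #   substitution at line 182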
201
202  [1 stmt, 32µs; 2 calls, 134µs]  __PACKAGE__->meta->make_immutable;
# spent 118µs making 1 call to Mouse::Meta::Class::make_immutable # spent 16µs making 1 call to Hailo::Tokenizer::Words::meta
203
204  =encoding utf8
205
206  =head1 NAME
207
208  Hailo::Tokenizer::Words - A tokenizer for L<Hailo|Hailo> which splits
209  on whitespace, mostly.
210
211  =head1 DESCRIPTION
212
213  This tokenizer does its best to handle various languages. It knows about most
214  apostrophes, quotes, and sentence terminators.
215
216  =head1 AUTHOR
217
218  Hinrik E<Ouml>rn SigurE<eth>sson, hinrik.sig@gmail.com
219
220  =head1 LICENSE AND COPYRIGHT
221
222  Copyright 2010 Hinrik E<Ouml>rn SigurE<eth>sson
223
224  This program is free software, you can redistribute it and/or modify
225  it under the same terms as Perl itself.
226
227  [1 stmt, 14µs; 1 call, 4.11ms]  =cut
 
# spent 1.33s (1.29+35.2ms) within Hailo::Tokenizer::Words::CORE:match which was called 557897 times, avg 2µs/call:
#   309219 times (760ms+12.3ms) by Hailo::Tokenizer::Words::make_tokens at line 65, avg 2µs/call
#   246522 times (525ms+22.3ms) by Hailo::Tokenizer::Words::make_tokens at line 97, avg 2µs/call
#     2156 times (6.37ms+714µs) by Hailo::Tokenizer::Words::make_tokens at line 114, avg 3µs/call
sub Hailo::Tokenizer::Words::CORE:match; # opcode
# spent 42µs within Hailo::Tokenizer::Words::CORE:qr which was called 22 times, avg 2µs/call:
#   once (5µs+0s) by Hailo::_new_class at line 15
#   once (2µs+0s) by Hailo::_new_class at each of lines 16-26, 33-40 and 44-45
sub Hailo::Tokenizer::Words::CORE:qr; # opcode
# spent 360ms (360+218µs) within Hailo::Tokenizer::Words::CORE:regcomp which was called 248693 times, avg 1µs/call:
#   246522 times (355ms+0s) by Hailo::Tokenizer::Words::make_tokens at line 97, avg 1µs/call
#     2156 times (3.61ms+0s) by Hailo::Tokenizer::Words::make_tokens at line 114, avg 2µs/call
#        4 times (981µs+218µs) by Hailo::Tokenizer::Words::make_tokens at line 65, avg 300µs/call
#   once (80µs+0s) by Hailo::_new_class at line 40
#   once (69µs+0s) by Hailo::_new_class at line 45
#   once (63µs+0s) by Hailo::Tokenizer::Words::make_tokens at line 91
#   once (59µs+0s) by Hailo::_new_class at line 24
#   once (47µs+0s) by Hailo::_new_class at line 44
#   once (34µs+0s) by Hailo::_new_class at line 39
#   once (27µs+0s) by Hailo::_new_class at line 38
#   once (22µs+0s) by Hailo::_new_class at line 23
#   once (20µs+0s) by Hailo::_new_class at line 22
#   once (19µs+0s) by Hailo::_new_class at line 16
#   once (17µs+0s) by Hailo::_new_class at line 18
sub Hailo::Tokenizer::Words::CORE:regcomp; # opcode
# spent 1.43s (1.43+2.59ms) within Hailo::Tokenizer::Words::CORE:subst which was called 752855 times, avg 2µs/call:
#   330923 times (396ms+366µs) by Hailo::Tokenizer::Words::make_tokens at line 65, avg 1µs/call
#   267296 times (706ms+1.85ms) by Hailo::Tokenizer::Words::make_tokens at line 91, avg 3µs/call
#   154636 times (324ms+373µs) by Hailo::Tokenizer::Words::make_tokens at line 62, avg 2µs/call
sub Hailo::Tokenizer::Words::CORE:subst; # opcode
# spent 2µs within Hailo::Tokenizer::Words::spacing which was called:
#   once (2µs+0s) by Hailo::Role::Tokenizer::BUILD at line 26 of lib/Hailo/Role/Tokenizer.pm
sub Hailo::Tokenizer::Words::spacing; # xsub