"""Header value parser implementing various email-related RFC parsing rules.

The parsing methods defined in this module implement various email related
parsing rules.  Principal among them is RFC 5322, which is the follow-on
to RFC 2822 and primarily a clarification of the former.  It also implements
RFC 2047 encoded word decoding.

RFC 5322 goes to considerable trouble to maintain backward compatibility with
RFC 822 in the parse phase, while cleaning up the structure on the generation
phase.  This parser supports correct RFC 5322 generation by tagging white space
as folding white space only when folding is allowed in the non-obsolete rule
sets.  Actually, the parser is even more generous when accepting input than RFC
5322 mandates, following the spirit of Postel's Law, which RFC 5322 encourages.

Where possible deviations from the standard are annotated on the 'defects'
attribute of tokens that deviate.

The general structure of the parser follows RFC 5322, and uses its terminology
where there is a direct correspondence.  Where the implementation requires a
somewhat different structure than that used by the formal grammar, new terms
that mimic the closest existing terms are used.  Thus, it really helps to have
a copy of RFC 5322 handy when studying this code.

Input to the parser is a string that has already been unfolded according to
RFC 5322 rules.  According to the RFC this unfolding is the very first step,
and this parser leaves the unfolding step to a higher level message parser,
which will have already detected the line breaks that need unfolding while
determining the beginning and end of each header.

The output of the parser is a TokenList object, which is a list subclass.  A
TokenList is a recursive data structure.  The terminal nodes of the structure
are Terminal objects, which are subclasses of str.  These do not correspond
directly to terminal objects in the formal grammar, but are instead more
practical higher level combinations of true terminals.

All TokenList and Terminal objects have a 'value' attribute, which produces
the semantically meaningful value of that part of the parse subtree.  The
value of all whitespace tokens (no matter how many sub-tokens they may
contain) is a single space, as per the RFC rules.  This includes 'CFWS', which
is herein included in the general class of whitespace tokens.  There is one
exception to the rule that whitespace tokens are collapsed into single spaces
in values: in the value of a 'bare-quoted-string' (a quoted-string with no
leading or trailing whitespace), any whitespace that appeared between the
quotation marks is preserved in the returned value.  Note that in all Terminal
strings quoted pairs are turned into their unquoted values.

All TokenList and Terminal objects also have a string value, which attempts to
be a "canonical" representation of the RFC-compliant form of the substring
that produced the parsed subtree, including minimal use of quoted pair
quoting.  Whitespace runs are not collapsed.

Comment tokens also have a 'content' attribute providing the string found
between the parens (including any nested comments) with whitespace preserved.

All TokenList and Terminal objects have a 'defects' attribute which is a
possibly empty list of all the defects found while creating the token.
Defects may appear on any token in the tree, and a composite list of all
defects in the subtree is available through the 'all_defects' attribute of any
node.  (For Terminal nodes x.defects == x.all_defects.)

Each object in a parse tree is called a 'token', and each has a 'token_type'
attribute that gives the name from the RFC 5322 grammar that it represents.
Not all RFC 5322 nodes are produced, and there is one non-RFC 5322 node that
may be produced: 'ptext'.  A 'ptext' is a string of printable ascii
characters.  It is returned in place of lists of (ctext/quoted-pair) and
(qtext/quoted-pair).

XXX: provide complete list of token types.
"""
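# The token-tree behavior described above can be exercised directly from the
# interpreter.  A minimal sketch, assuming this module is available as the
# stdlib's email._header_value_parser:
#
#     from email._header_value_parser import get_unstructured
#
#     tokens = get_unstructured("Hello  world")
#     # str() round-trips the original text; .value collapses whitespace runs.
#     print(tokens.token_type)   # -> 'unstructured'
#     print(str(tokens))         # -> 'Hello  world'
#     print(tokens.value)        # -> 'Hello world'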
import re
import urllib.parse   # for urllib.parse.unquote and unquote_to_bytes
from collections import namedtuple, OrderedDict

from email import _encoded_words as _ew
from email import errors
from email import utils

#
# Useful constants and functions
#

WSP = set(' \t')
CFWS_LEADER = WSP | set('(')
SPECIALS = set(r'()<>@,:;.\"[]')
ATOM_ENDS = SPECIALS | WSP
DOT_ATOM_ENDS = ATOM_ENDS - set('.')
# '.', '"', and '(' do not end phrases in order to support obs-phrase
PHRASE_ENDS = SPECIALS - set('."(')
TSPECIALS = (SPECIALS | set('/?=')) - set('.')
TOKEN_ENDS = TSPECIALS | WSP
ASPECIALS = TSPECIALS | set("*'%")
ATTRIBUTE_ENDS = ASPECIALS | WSP
EXTENDED_ATTRIBUTE_ENDS = ATTRIBUTE_ENDS - set('%')

def quote_string(value):
    return '"'+str(value).replace('\\', '\\\\').replace('"', r'\"')+'"'
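# A quick standalone illustration of the escaping order in quote_string:
# backslashes are doubled first, and only then are double quotes escaped, so
# existing backslashes can never be mistaken for quote escapes.

```python
def quote_string(value):
    # Same logic as the module's quote_string: escape backslashes first,
    # then double quotes, and wrap the result in double quotes.
    return '"' + str(value).replace('\\', '\\\\').replace('"', r'\"') + '"'

print(quote_string('say "hi"'))     # "say \"hi\""
print(quote_string('back\\slash'))  # "back\\slash"
```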
#
# Accumulator for header folding
#

class _Folded:

    def __init__(self, maxlen, policy):
        self.maxlen = maxlen
        self.policy = policy
        self.lastlen = 0
        self.stickyspace = None
        self.firstline = True
        self.done = []
        self.current = []

    def newline(self):
        self.done.extend(self.current)
        self.done.append(self.policy.linesep)
        self.current.clear()
        self.lastlen = 0

    def finalize(self):
        if self.current:
            self.newline()

    def __str__(self):
        return ''.join(self.done)

    def append(self, stoken):
        self.current.append(stoken)

    def append_if_fits(self, token, stoken=None):
        if stoken is None:
            stoken = str(token)
        l = len(stoken)
        if self.stickyspace is not None:
            stickyspace_len = len(self.stickyspace)
            if self.lastlen + stickyspace_len + l <= self.maxlen:
                self.current.append(self.stickyspace)
                self.lastlen += stickyspace_len
                self.current.append(stoken)
                self.lastlen += l
                self.stickyspace = None
                self.firstline = False
                return True
            if token.has_fws:
                ws = token.pop_leading_fws()
                if ws is not None:
                    self.stickyspace += str(ws)
                    stickyspace_len += len(ws)
                token._fold(self)
                return True
            if stickyspace_len and l + 1 <= self.maxlen:
                margin = self.maxlen - l
                if 0 < margin < stickyspace_len:
                    trim = stickyspace_len - margin
                    self.current.append(self.stickyspace[:trim])
                    self.stickyspace = self.stickyspace[trim:]
                    stickyspace_len = trim
                self.newline()
                self.current.append(self.stickyspace)
                self.current.append(stoken)
                self.lastlen = l + stickyspace_len
                self.stickyspace = None
                self.firstline = False
                return True
            if not self.firstline:
                self.newline()
                self.current.append(self.stickyspace)
                self.current.append(stoken)
                self.stickyspace = None
                self.firstline = False
                return True
        if self.lastlen + l <= self.maxlen:
            self.current.append(stoken)
            self.lastlen += l
            return True
        if l < self.maxlen:
            self.newline()
            self.current.append(stoken)
            self.lastlen = l
            return True
        return False
#
# TokenList and its subclasses
#

class TokenList(list):

    token_type = None

    def __init__(self, *args, **kw):
        super().__init__(*args, **kw)
        self.defects = []

    def __str__(self):
        return ''.join(str(x) for x in self)

    def __repr__(self):
        return '{}({})'.format(self.__class__.__name__,
                               super().__repr__())

    @property
    def value(self):
        return ''.join(x.value for x in self if x.value)

    @property
    def all_defects(self):
        return sum((x.all_defects for x in self), self.defects)

    #
    # Folding API
    #
    # parts():
    #
    # return a list of objects that constitute the "higher level syntactic
    # objects" specified by the RFC as the best places to fold a header line.
    # The returned objects must include leading folding white space, even if
    # this means mutating the underlying parse tree of the object.  Each
    # object is only responsible for returning *its* parts, and should not
    # drill down to any lower level except as required to meet the leading
    # folding white space constraint.
    #
    # _fold(folded):
    #
    #   folded: the result accumulator.  This is an instance of _Folded.
    #       (XXX: I haven't finished factoring this out yet, the folding code
    #       pretty much uses this as a state object.)  When the folded.current
    #       contains as much text as will fit, the _fold method should call
    #       folded.newline.
    #   folded.lastlen: the current length of the text stored in
    #       folded.current.
    #   folded.maxlen: The maximum number of characters that may appear on a
    #       folded line.  Differs from the policy setting in that "no limit"
    #       is represented by +inf, which means it can be used in the
    #       trivially logical fashion in comparisons.
    #
    # Currently no subclasses implement parts, and I think this will remain
    # true.  A subclass only needs to implement _fold when the generic version
    # isn't sufficient.  _fold will need to be implemented primarily when it
    # is possible for encoded words to appear in the specialized token-list,
    # since there is no generic algorithm that can know where exactly the
    # encoded words are allowed.  A _fold implementation is responsible for
    # filling lines in the same general way that the top level _fold does.  It
    # may, and should, call the _fold method of sub-objects in a similar
    # fashion to that of the top level _fold.
    #
    # XXX: I'm hoping it will be possible to factor the existing code further
    # to reduce redundancy and make the logic clearer.

    @property
    def parts(self):
        klass = self.__class__
        this = []
        for token in self:
            if token.startswith_fws():
                if this:
                    yield this[0] if len(this)==1 else klass(this)
                    this.clear()
            end_ws = token.pop_trailing_ws()
            this.append(token)
            if end_ws:
                yield klass(this)
                this = [end_ws]
        if this:
            yield this[0] if len(this)==1 else klass(this)

    def startswith_fws(self):
        return self[0].startswith_fws()

    def pop_leading_fws(self):
        if self[0].token_type == 'fws':
            return self.pop(0)
        return self[0].pop_leading_fws()

    def pop_trailing_ws(self):
        if self[-1].token_type == 'cfws':
            return self.pop(-1)
        return self[-1].pop_trailing_ws()

    @property
    def has_fws(self):
        for part in self:
            if part.has_fws:
                return True
        return False

    def has_leading_comment(self):
        return self[0].has_leading_comment()

    @property
    def comments(self):
        comments = []
        for token in self:
            comments.extend(token.comments)
        return comments

    def fold(self, *, policy):
        # max_line_length 0/None means no limit, ie: infinitely long.
        maxlen = policy.max_line_length or float("+inf")
        folded = _Folded(maxlen, policy)
        self._fold(folded)
        folded.finalize()
        return str(folded)

    def as_encoded_word(self, charset):
        # This works only for things returned by 'parts', which include
        # the leading fws, if any, that should be used.
        res = []
        ws = self.pop_leading_fws()
        if ws:
            res.append(ws)
        trailer = self.pop(-1) if self[-1].token_type=='fws' else ''
        res.append(_ew.encode(str(self), charset))
        res.append(trailer)
        return ''.join(res)

    def cte_encode(self, charset, policy):
        res = []
        for part in self:
            res.append(part.cte_encode(charset, policy))
        return ''.join(res)

    def _fold(self, folded):
        for part in self.parts:
            tstr = str(part)
            tlen = len(tstr)
            try:
                str(part).encode('us-ascii')
            except UnicodeEncodeError:
                if any(isinstance(x, errors.UndecodableBytesDefect)
                       for x in part.all_defects):
                    charset = 'unknown-8bit'
                else:
                    # XXX: this should be a policy setting
                    charset = 'utf-8'
                tstr = part.cte_encode(charset, folded.policy)
                tlen = len(tstr)
            if folded.append_if_fits(part, tstr):
                continue
            # Peel off the leading whitespace if any and make it sticky, to
            # avoid infinite recursion.
            ws = part.pop_leading_fws()
            if ws is not None:
                folded.stickyspace = str(ws)
                if folded.append_if_fits(part):
                    continue
            if part.has_fws:
                part._fold(folded)
                continue
            # There are no fold points in this one; it is too long for a
            # single line and can't be split...we just have to put it on its
            # own line.
            folded.append(tstr)
            folded.newline()

    def pprint(self, indent=''):
        print('\n'.join(self._pp(indent='')))

    def ppstr(self, indent=''):
        return '\n'.join(self._pp(indent=''))

    def _pp(self, indent=''):
        yield '{}{}/{}('.format(
            indent,
            self.__class__.__name__,
            self.token_type)
        for token in self:
            if not hasattr(token, '_pp'):
                yield (indent + '    !! invalid element in token '
                                'list: {!r}'.format(token))
            else:
                for line in token._pp(indent+'    '):
                    yield line
        if self.defects:
            extra = ' Defects: {}'.format(self.defects)
        else:
            extra = ''
        yield '{}){}'.format(indent, extra)
class WhiteSpaceTokenList(TokenList):

    @property
    def value(self):
        return ' '

    @property
    def comments(self):
        return [x.content for x in self if x.token_type=='comment']


class UnstructuredTokenList(TokenList):

    token_type = 'unstructured'

    def _fold(self, folded):
        if any(x.token_type=='encoded-word' for x in self):
            return self._fold_encoded(folded)
        # Here we can have either a pure ASCII string that may or may not
        # have surrogateescape encoded bytes, or a unicode string.
        last_ew = None
        for part in self.parts:
            tstr = str(part)
            is_ew = False
            try:
                str(part).encode('us-ascii')
            except UnicodeEncodeError:
                if any(isinstance(x, errors.UndecodableBytesDefect)
                       for x in part.all_defects):
                    charset = 'unknown-8bit'
                else:
                    charset = 'utf-8'
                if last_ew is not None:
                    # We've already done an EW, combine this one with it
                    # if there's room.
                    chunk = get_unstructured(
                        ''.join(folded.current[last_ew:]+[tstr])).as_encoded_word(charset)
                    oldlastlen = sum(len(x) for x in folded.current[:last_ew])
                    schunk = str(chunk)
                    lchunk = len(schunk)
                    if oldlastlen + lchunk <= folded.maxlen:
                        del folded.current[last_ew:]
                        folded.append(schunk)
                        folded.lastlen = oldlastlen + lchunk
                        continue
                tstr = part.as_encoded_word(charset)
                is_ew = True
            if folded.append_if_fits(part, tstr):
                if is_ew:
                    last_ew = len(folded.current) - 1
                continue
            if is_ew or last_ew:
                # It's too big to fit on the line, but since we've
                # got encoded words we can use encoded word folding.
                part._fold_as_ew(folded)
                continue
            # Peel off the leading whitespace if any and make it sticky, to
            # avoid infinite recursion.
            ws = part.pop_leading_fws()
            if ws is not None:
                folded.stickyspace = str(ws)
                if folded.append_if_fits(part):
                    continue
            if part.has_fws:
                part._fold(folded)
                continue
            # It can't be split...we just have to put it on its own line.
            folded.append(tstr)
            folded.newline()
            last_ew = None

    def cte_encode(self, charset, policy):
        res = []
        last_ew = None
        for part in self:
            spart = str(part)
            try:
                spart.encode('us-ascii')
                res.append(spart)
            except UnicodeEncodeError:
                if last_ew is None:
                    res.append(part.cte_encode(charset, policy))
                    last_ew = len(res)
                else:
                    tl = get_unstructured(''.join(res[last_ew:] + [spart]))
                    res.append(tl.as_encoded_word(charset))
        return ''.join(res)
class Phrase(TokenList):

    token_type = 'phrase'

    def _fold(self, folded):
        # As with Unstructured, we can have pure ASCII with or without
        # surrogateescape encoded bytes, or we could have unicode.  But this
        # case is more complicated, since we have to deal with the various
        # sub-token types and how they can be composed in the face of
        # unicode-that-needs-CTE-encoding, and the fact that if a token is a
        # comment that becomes a barrier across which we can't compose
        # encoded words.
        last_ew = None
        for part in self.parts:
            tstr = str(part)
            tlen = len(tstr)
            has_ew = False
            try:
                str(part).encode('us-ascii')
            except UnicodeEncodeError:
                if any(isinstance(x, errors.UndecodableBytesDefect)
                       for x in part.all_defects):
                    charset = 'unknown-8bit'
                else:
                    charset = 'utf-8'
                if last_ew is not None and not part.has_leading_comment():
                    # We've already done an EW, let's see if we can combine
                    # this one with it.  The last_ew logic ensures that all we
                    # have at this point is atoms, no comments or quoted
                    # strings.  So we can treat the text between the last
                    # encoded word and the content of this token as
                    # unstructured text, and things will work correctly.  But
                    # we have to strip off any trailing comment on this token
                    # first, and if it is a quoted string we have to pull out
                    # the content (we're encoding it, so it no longer needs to
                    # be quoted).
                    if part[-1].token_type == 'cfws' and part.comments:
                        remainder = part.pop(-1)
                    else:
                        remainder = ''
                    for i, token in enumerate(part):
                        if token.token_type == 'bare-quoted-string':
                            part[i] = UnstructuredTokenList(token[:])
                    chunk = get_unstructured(
                        ''.join(folded.current[last_ew:]+[tstr])).as_encoded_word(charset)
                    schunk = str(chunk)
                    lchunk = len(schunk)
                    if last_ew + lchunk <= folded.maxlen:
                        del folded.current[last_ew:]
                        folded.append(schunk)
                        folded.lastlen = sum(len(x) for x in folded.current)
                        continue
                tstr = part.as_encoded_word(charset)
                tlen = len(tstr)
                has_ew = True
            if folded.append_if_fits(part, tstr):
                if has_ew and not part.comments:
                    last_ew = len(folded.current) - 1
                elif part.comments or part.token_type == 'quoted-string':
                    # If a comment is involved we can't combine EWs.  And if
                    # a quoted string is involved, it's not worth the effort
                    # to try to combine them.
                    last_ew = None
                continue
            part._fold(folded)

    def cte_encode(self, charset, policy):
        res = []
        last_ew = None
        is_ew = False
        for part in self:
            spart = str(part)
            try:
                spart.encode('us-ascii')
                res.append(spart)
            except UnicodeEncodeError:
                is_ew = True
                if last_ew is None:
                    if not part.comments:
                        last_ew = len(res)
                    res.append(part.cte_encode(charset, policy))
                elif not part.has_leading_comment():
                    if part[-1].token_type == 'cfws' and part.comments:
                        remainder = part.pop(-1)
                    else:
                        remainder = ''
                    for i, token in enumerate(part):
                        if token.token_type == 'bare-quoted-string':
                            part[i] = UnstructuredTokenList(token[:])
                    tl = get_unstructured(''.join(res[last_ew:] + [spart]))
                    res[last_ew:] = [tl.as_encoded_word(charset)]
            if part.comments or (not is_ew and part.token_type == 'quoted-string'):
                last_ew = None
        return ''.join(res)
class Word(TokenList):

    token_type = 'word'


class CFWSList(WhiteSpaceTokenList):

    token_type = 'cfws'

    def has_leading_comment(self):
        return bool(self.comments)


class Atom(TokenList):

    token_type = 'atom'


class Token(TokenList):

    token_type = 'token'


class EncodedWord(TokenList):

    token_type = 'encoded-word'
    cte = None
    charset = None
    lang = None

    @property
    def encoded(self):
        if self.cte is not None:
            return self.cte
        return _ew.encode(str(self), self.charset)


class QuotedString(TokenList):

    token_type = 'quoted-string'

    @property
    def content(self):
        for x in self:
            if x.token_type == 'bare-quoted-string':
                return x.value

    @property
    def quoted_value(self):
        res = []
        for x in self:
            if x.token_type == 'bare-quoted-string':
                res.append(str(x))
            else:
                res.append(x.value)
        return ''.join(res)

    @property
    def stripped_value(self):
        for token in self:
            if token.token_type == 'bare-quoted-string':
                return token.value


class BareQuotedString(QuotedString):

    token_type = 'bare-quoted-string'

    def __str__(self):
        return quote_string(''.join(str(x) for x in self))

    @property
    def value(self):
        return ''.join(str(x) for x in self)


class Comment(WhiteSpaceTokenList):

    token_type = 'comment'

    def __str__(self):
        return ''.join(sum([
                            ["("],
                            [self.quote(x) for x in self],
                            [")"],
                            ], []))

    def quote(self, value):
        if value.token_type == 'comment':
            return str(value)
        return str(value).replace('\\', '\\\\').replace(
                                  '(', r'\(').replace(
                                  ')', r'\)')

    @property
    def content(self):
        return ''.join(str(x) for x in self)

    @property
    def comments(self):
        return [self.content]
class AddressList(TokenList):

    token_type = 'address-list'

    @property
    def addresses(self):
        return [x for x in self if x.token_type=='address']

    @property
    def mailboxes(self):
        return sum((x.mailboxes
                    for x in self if x.token_type=='address'), [])

    @property
    def all_mailboxes(self):
        return sum((x.all_mailboxes
                    for x in self if x.token_type=='address'), [])


class Address(TokenList):

    token_type = 'address'

    @property
    def display_name(self):
        if self[0].token_type == 'group':
            return self[0].display_name

    @property
    def mailboxes(self):
        if self[0].token_type == 'mailbox':
            return [self[0]]
        elif self[0].token_type == 'invalid-mailbox':
            return []
        return self[0].mailboxes

    @property
    def all_mailboxes(self):
        if self[0].token_type == 'mailbox':
            return [self[0]]
        elif self[0].token_type == 'invalid-mailbox':
            return [self[0]]
        return self[0].all_mailboxes


class MailboxList(TokenList):

    token_type = 'mailbox-list'

    @property
    def mailboxes(self):
        return [x for x in self if x.token_type=='mailbox']

    @property
    def all_mailboxes(self):
        return [x for x in self
                if x.token_type in ('mailbox', 'invalid-mailbox')]


class GroupList(TokenList):

    token_type = 'group-list'

    @property
    def mailboxes(self):
        if not self or self[0].token_type != 'mailbox-list':
            return []
        return self[0].mailboxes

    @property
    def all_mailboxes(self):
        if not self or self[0].token_type != 'mailbox-list':
            return []
        return self[0].all_mailboxes


class Group(TokenList):

    token_type = "group"

    @property
    def mailboxes(self):
        if self[2].token_type != 'group-list':
            return []
        return self[2].mailboxes

    @property
    def all_mailboxes(self):
        if self[2].token_type != 'group-list':
            return []
        return self[2].all_mailboxes

    @property
    def display_name(self):
        return self[0].display_name


class NameAddr(TokenList):

    token_type = 'name-addr'

    @property
    def display_name(self):
        if len(self) == 1:
            return None
        return self[0].display_name

    @property
    def local_part(self):
        return self[-1].local_part

    @property
    def domain(self):
        return self[-1].domain

    @property
    def route(self):
        return self[-1].route

    @property
    def addr_spec(self):
        return self[-1].addr_spec


class AngleAddr(TokenList):

    token_type = 'angle-addr'

    @property
    def local_part(self):
        for x in self:
            if x.token_type == 'addr-spec':
                return x.local_part

    @property
    def domain(self):
        for x in self:
            if x.token_type == 'addr-spec':
                return x.domain

    @property
    def route(self):
        for x in self:
            if x.token_type == 'obs-route':
                return x.domains

    @property
    def addr_spec(self):
        for x in self:
            if x.token_type == 'addr-spec':
                return x.addr_spec
        else:
            return '<>'


class ObsRoute(TokenList):

    token_type = 'obs-route'

    @property
    def domains(self):
        return [x.domain for x in self if x.token_type == 'domain']
class Mailbox(TokenList):

    token_type = 'mailbox'

    @property
    def display_name(self):
        if self[0].token_type == 'name-addr':
            return self[0].display_name

    @property
    def local_part(self):
        return self[0].local_part

    @property
    def domain(self):
        return self[0].domain

    @property
    def route(self):
        if self[0].token_type == 'name-addr':
            return self[0].route

    @property
    def addr_spec(self):
        return self[0].addr_spec


class InvalidMailbox(TokenList):

    token_type = 'invalid-mailbox'

    @property
    def display_name(self):
        return None

    local_part = domain = route = addr_spec = display_name


class Domain(TokenList):

    token_type = 'domain'

    @property
    def domain(self):
        return ''.join(super().value.split())


class DotAtom(TokenList):

    token_type = 'dot-atom'


class DotAtomText(TokenList):

    token_type = 'dot-atom-text'


class AddrSpec(TokenList):

    token_type = 'addr-spec'

    @property
    def local_part(self):
        return self[0].local_part

    @property
    def domain(self):
        if len(self) < 3:
            return None
        return self[-1].domain

    @property
    def value(self):
        if len(self) < 3:
            return self[0].value
        return self[0].value.rstrip()+self[1].value+self[2].value.lstrip()

    @property
    def addr_spec(self):
        nameset = set(self.local_part)
        if len(nameset) > len(nameset-DOT_ATOM_ENDS):
            lp = quote_string(self.local_part)
        else:
            lp = self.local_part
        if self.domain is not None:
            return lp + '@' + self.domain
        return lp


class ObsLocalPart(TokenList):

    token_type = 'obs-local-part'


class DisplayName(Phrase):

    token_type = 'display-name'

    @property
    def display_name(self):
        res = TokenList(self)
        if res[0].token_type == 'cfws':
            res.pop(0)
        else:
            if res[0][0].token_type == 'cfws':
                res[0] = TokenList(res[0][1:])
        if res[-1].token_type == 'cfws':
            res.pop()
        else:
            if res[-1][-1].token_type == 'cfws':
                res[-1] = TokenList(res[-1][:-1])
        return res.value

    @property
    def value(self):
        quote = False
        if self.defects:
            quote = True
        else:
            for x in self:
                if x.token_type == 'quoted-string':
                    quote = True
        if quote:
            pre = post = ''
            if self[0].token_type=='cfws' or self[0][0].token_type=='cfws':
                pre = ' '
            if self[-1].token_type=='cfws' or self[-1][-1].token_type=='cfws':
                post = ' '
            return pre+quote_string(self.display_name)+post
        else:
            return super().value
class LocalPart(TokenList):

    token_type = 'local-part'

    @property
    def value(self):
        if self[0].token_type == "quoted-string":
            return self[0].quoted_value
        else:
            return self[0].value

    @property
    def local_part(self):
        # Strip whitespace from front, back, and around dots.
        res = [DOT]
        last = DOT
        last_is_tl = False
        for tok in self[0] + [DOT]:
            if tok.token_type == 'cfws':
                continue
            if (last_is_tl and tok.token_type == 'dot' and
                    last[-1].token_type == 'cfws'):
                res[-1] = TokenList(last[:-1])
            is_tl = isinstance(tok, TokenList)
            if (is_tl and last.token_type == 'dot' and
                    tok[0].token_type == 'cfws'):
                res.append(TokenList(tok[1:]))
            else:
                res.append(tok)
            last = res[-1]
            last_is_tl = is_tl
        res = TokenList(res[1:-1])
        return res.value


class DomainLiteral(TokenList):

    token_type = 'domain-literal'

    @property
    def domain(self):
        return ''.join(super().value.split())

    @property
    def ip(self):
        for x in self:
            if x.token_type == 'ptext':
                return x.value
  822. class MIMEVersion(TokenList):
  823. token_type = 'mime-version'
  824. major = None
  825. minor = None
  826. class Parameter(TokenList):
  827. token_type = 'parameter'
  828. sectioned = False
  829. extended = False
  830. charset = 'us-ascii'
  831. @property
  832. def section_number(self):
  833. # Because the first token, the attribute (name) eats CFWS, the second
  834. # token is always the section if there is one.
  835. return self[1].number if self.sectioned else 0
  836. @property
  837. def param_value(self):
  838. # This is part of the "handle quoted extended parameters" hack.
  839. for token in self:
  840. if token.token_type == 'value':
  841. return token.stripped_value
  842. if token.token_type == 'quoted-string':
  843. for token in token:
  844. if token.token_type == 'bare-quoted-string':
  845. for token in token:
  846. if token.token_type == 'value':
  847. return token.stripped_value
  848. return ''
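# A sketch of what the sectioned-parameter machinery above enables, exercised
# through the public headerregistry API (which drives this parser) rather
# than by constructing Parameter tokens directly.  RFC 2231 sections may
# arrive in any order; they are reassembled by section number:

```python
from email.headerregistry import HeaderRegistry

# name*1 deliberately appears before name*0; the parser sorts by section.
header = HeaderRegistry()('Content-Type',
                          'text/plain; name*1=two.txt; name*0=part-')
print(header.content_type)     # text/plain
print(header.params['name'])   # sections joined in numeric order
```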
  849. class InvalidParameter(Parameter):
  850. token_type = 'invalid-parameter'
  851. class Attribute(TokenList):
  852. token_type = 'attribute'
  853. @property
  854. def stripped_value(self):
  855. for token in self:
  856. if token.token_type.endswith('attrtext'):
  857. return token.value
  858. class Section(TokenList):
  859. token_type = 'section'
  860. number = None
  861. class Value(TokenList):
  862. token_type = 'value'
  863. @property
  864. def stripped_value(self):
  865. token = self[0]
  866. if token.token_type == 'cfws':
  867. token = self[1]
  868. if token.token_type.endswith(
  869. ('quoted-string', 'attribute', 'extended-attribute')):
  870. return token.stripped_value
  871. return self.value

class MimeParameters(TokenList):

    token_type = 'mime-parameters'

    @property
    def params(self):
        # The RFC specifically states that the ordering of parameters is not
        # guaranteed and may be reordered by the transport layer.  So we have
        # to assume the RFC 2231 pieces can come in any order.  However, we
        # output them in the order that we first see a given name, which gives
        # us a stable __str__.
        params = OrderedDict()
        for token in self:
            if not token.token_type.endswith('parameter'):
                continue
            if token[0].token_type != 'attribute':
                continue
            name = token[0].value.strip()
            if name not in params:
                params[name] = []
            params[name].append((token.section_number, token))
        for name, parts in params.items():
            parts = sorted(parts)
            # XXX: there might be more recovery we could do here if, for
            # example, this is really a case of a duplicate attribute name.
            value_parts = []
            charset = parts[0][1].charset
            for i, (section_number, param) in enumerate(parts):
                if section_number != i:
                    param.defects.append(errors.InvalidHeaderDefect(
                        "inconsistent multipart parameter numbering"))
                value = param.param_value
                if param.extended:
                    try:
                        value = urllib.parse.unquote_to_bytes(value)
                    except UnicodeEncodeError:
                        # source had surrogate escaped bytes.  What we do now
                        # is a bit of an open question.  I'm not sure this is
                        # the best choice, but it is what the old algorithm did
                        value = urllib.parse.unquote(value, encoding='latin-1')
                    else:
                        try:
                            value = value.decode(charset, 'surrogateescape')
                        except LookupError:
                            # XXX: there should really be a custom defect for
                            # unknown character set to make it easy to find,
                            # because otherwise unknown charset is a silent
                            # failure.
                            value = value.decode('us-ascii', 'surrogateescape')
                        if utils._has_surrogates(value):
                            param.defects.append(
                                errors.UndecodableBytesDefect())
                value_parts.append(value)
            value = ''.join(value_parts)
            yield name, value

    def __str__(self):
        params = []
        for name, value in self.params:
            if value:
                params.append('{}={}'.format(name, quote_string(value)))
            else:
                params.append(name)
        params = '; '.join(params)
        return ' ' + params if params else ''

class ParameterizedHeaderValue(TokenList):

    @property
    def params(self):
        for token in reversed(self):
            if token.token_type == 'mime-parameters':
                return token.params
        return {}

    @property
    def parts(self):
        if self and self[-1].token_type == 'mime-parameters':
            # We don't want to start a new line if all of the params don't fit
            # after the value, so unwrap the parameter list.
            return TokenList(self[:-1] + self[-1])
        return TokenList(self).parts


class ContentType(ParameterizedHeaderValue):

    token_type = 'content-type'
    maintype = 'text'
    subtype = 'plain'


class ContentDisposition(ParameterizedHeaderValue):

    token_type = 'content-disposition'
    content_disposition = None


class ContentTransferEncoding(TokenList):

    token_type = 'content-transfer-encoding'
    cte = '7bit'


class HeaderLabel(TokenList):

    token_type = 'header-label'


class Header(TokenList):

    token_type = 'header'

    def _fold(self, folded):
        folded.append(str(self.pop(0)))
        folded.lastlen = len(folded.current[0])
        # The first line of the header is different from all others: we don't
        # want to start a new object on a new line if it has any fold points
        # in it that would allow part of it to be on the first header line.
        # Further, if the first fold point would fit on the new line, we want
        # to do that, but if it doesn't we want to put it on the first line.
        # Folded supports this via the stickyspace attribute.  If this
        # attribute is not None, it does the special handling.
        folded.stickyspace = (str(self.pop(0))
                              if self[0].token_type == 'cfws' else '')
        rest = self.pop(0)
        if self:
            raise ValueError("Malformed Header token list")
        rest._fold(folded)

#
# Terminal classes and instances
#

class Terminal(str):

    def __new__(cls, value, token_type):
        self = super().__new__(cls, value)
        self.token_type = token_type
        self.defects = []
        return self

    def __repr__(self):
        return "{}({})".format(self.__class__.__name__, super().__repr__())

    @property
    def all_defects(self):
        return list(self.defects)

    def _pp(self, indent=''):
        return ["{}{}/{}({}){}".format(
            indent,
            self.__class__.__name__,
            self.token_type,
            super().__repr__(),
            '' if not self.defects else ' {}'.format(self.defects),
            )]

    def cte_encode(self, charset, policy):
        value = str(self)
        try:
            value.encode('us-ascii')
            return value
        except UnicodeEncodeError:
            return _ew.encode(value, charset)

    def pop_trailing_ws(self):
        # This terminates the recursion.
        return None

    def pop_leading_fws(self):
        # This terminates the recursion.
        return None

    @property
    def comments(self):
        return []

    def has_leading_comment(self):
        return False

    def __getnewargs__(self):
        return (str(self), self.token_type)
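The Terminal class above uses a standard Python pattern: because str is immutable, the extra constructor argument must be handled in __new__, and __getnewargs__ is what allows pickle and copy to recreate the instance with both arguments. A minimal standalone sketch of the same pattern (the Tagged name is illustrative, not part of this module):

```python
class Tagged(str):
    # str is immutable, so extra constructor data must be attached in
    # __new__; __init__ runs too late to affect how the str is built.
    def __new__(cls, value, tag):
        self = super().__new__(cls, value)
        self.tag = tag
        return self

    def __getnewargs__(self):
        # Lets pickle/copy recreate the instance as Tagged(value, tag),
        # preserving the extra attribute.
        return (str(self), self.tag)

t = Tagged('atom', 'atext')
```

The instance behaves as the plain string in comparisons and dict lookups while still carrying its tag, which is exactly how Terminal tokens double as both text and parse-tree nodes.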

class WhiteSpaceTerminal(Terminal):

    @property
    def value(self):
        return ' '

    def startswith_fws(self):
        return True

    has_fws = True


class ValueTerminal(Terminal):

    @property
    def value(self):
        return self

    def startswith_fws(self):
        return False

    has_fws = False

    def as_encoded_word(self, charset):
        return _ew.encode(str(self), charset)


class EWWhiteSpaceTerminal(WhiteSpaceTerminal):

    @property
    def value(self):
        return ''

    @property
    def encoded(self):
        return self[:]

    def __str__(self):
        return ''

    has_fws = True


# XXX these need to become classes and used as instances so
# that a program can't change them in a parse tree and screw
# up other parse trees.  Maybe should have tests for that, too.
DOT = ValueTerminal('.', 'dot')
ListSeparator = ValueTerminal(',', 'list-separator')
RouteComponentMarker = ValueTerminal('@', 'route-component-marker')

#
# Parser
#

"""Parse strings according to RFC822/2047/2822/5322 rules.

This is a stateless parser.  Each get_XXX function accepts a string and
returns either a Terminal or a TokenList representing the RFC object named
by the method and a string containing the remaining unparsed characters
from the input.  Thus a parser method consumes the next syntactic construct
of a given type and returns a token representing the construct plus the
unparsed remainder of the input string.

For example, if the first element of a structured header is a 'phrase',
then:

    phrase, value = get_phrase(value)

returns the complete phrase from the start of the string value, plus any
characters left in the string after the phrase is removed.

"""

_wsp_splitter = re.compile(r'([{}]+)'.format(''.join(WSP))).split
_non_atom_end_matcher = re.compile(r"[^{}]+".format(
    ''.join(ATOM_ENDS).replace('\\', '\\\\').replace(']', r'\]'))).match
_non_printable_finder = re.compile(r"[\x00-\x20\x7F]").findall
_non_token_end_matcher = re.compile(r"[^{}]+".format(
    ''.join(TOKEN_ENDS).replace('\\', '\\\\').replace(']', r'\]'))).match
_non_attribute_end_matcher = re.compile(r"[^{}]+".format(
    ''.join(ATTRIBUTE_ENDS).replace('\\', '\\\\').replace(']', r'\]'))).match
_non_extended_attribute_end_matcher = re.compile(r"[^{}]+".format(
    ''.join(EXTENDED_ATTRIBUTE_ENDS).replace(
        '\\', '\\\\').replace(']', r'\]'))).match

def _validate_xtext(xtext):
    """If input token contains ASCII non-printables, register a defect."""

    non_printables = _non_printable_finder(xtext)
    if non_printables:
        xtext.defects.append(errors.NonPrintableDefect(non_printables))
    if utils._has_surrogates(xtext):
        xtext.defects.append(errors.UndecodableBytesDefect(
            "Non-ASCII characters found in header token"))

def _get_ptext_to_endchars(value, endchars):
    """Scan printables/quoted-pairs until endchars and return unquoted ptext.

    This function turns a run of qcontent, ccontent-without-comments, or
    dtext-with-quoted-printables into a single string by unquoting any
    quoted printables.  It returns the string, the remaining value, and
    a flag that is True iff there were any quoted printables decoded.

    """
    fragment, *remainder = _wsp_splitter(value, 1)
    vchars = []
    escape = False
    had_qp = False
    for pos in range(len(fragment)):
        if fragment[pos] == '\\':
            if escape:
                escape = False
                had_qp = True
            else:
                escape = True
                continue
        if escape:
            escape = False
        elif fragment[pos] in endchars:
            break
        vchars.append(fragment[pos])
    else:
        pos = pos + 1
    return ''.join(vchars), ''.join([fragment[pos:]] + remainder), had_qp
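The scanning loop above can be hard to follow because the escape flag does double duty. A simplified standalone sketch of the same scan (the function name is illustrative): a backslash starts a quoted-pair, an escaped character is always taken literally, and scanning stops at the first unescaped end character.

```python
def unquote_to_endchars(fragment, endchars):
    # Simplified version of the scan: returns (decoded_text, rest, had_qp).
    vchars = []
    escape = False
    had_qp = False
    pos = 0
    for pos, ch in enumerate(fragment):
        if ch == '\\' and not escape:
            escape = True          # start of a quoted-pair
            continue
        if escape:
            escape = False         # quoted character is taken literally
            had_qp = True
        elif ch in endchars:
            break                  # unescaped end character: stop here
        vchars.append(ch)
    else:
        pos += 1                   # consumed the whole fragment
    return ''.join(vchars), fragment[pos:], had_qp
```

Note that the end character itself is left on the remainder, mirroring the real routine, so callers can decide how to consume it.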

def _decode_ew_run(value):
    """ Decode a run of RFC 2047 encoded words.

    _decode_ew_run(value) -> (text, value, defects)

    Scans the supplied value for a run of tokens that look like they are RFC
    2047 encoded words, decodes those words into text according to RFC 2047
    rules (whitespace between encoded words is discarded), and returns the
    text and the remaining value (including any leading whitespace on the
    remaining value), as well as a list of any defects encountered while
    decoding.  The input value may not have any leading whitespace.

    """
    res = []
    defects = []
    last_ws = ''
    while value:
        try:
            tok, ws, value = _wsp_splitter(value, 1)
        except ValueError:
            tok, ws, value = value, '', ''
        if not (tok.startswith('=?') and tok.endswith('?=')):
            return ''.join(res), last_ws + tok + ws + value, defects
        text, charset, lang, new_defects = _ew.decode(tok)
        res.append(text)
        defects.extend(new_defects)
        last_ws = ws
    return ''.join(res), last_ws, defects


def get_fws(value):
    """FWS = 1*WSP

    This isn't the RFC definition.  We're using fws to represent tokens where
    folding can be done, but when we are parsing the *un*folding has already
    been done so we don't need to watch out for CRLF.

    """
    newvalue = value.lstrip()
    fws = WhiteSpaceTerminal(value[:len(value)-len(newvalue)], 'fws')
    return fws, newvalue

def get_encoded_word(value):
    """ encoded-word = "=?" charset "?" encoding "?" encoded-text "?="

    """
    ew = EncodedWord()
    if not value.startswith('=?'):
        raise errors.HeaderParseError(
            "expected encoded word but found {}".format(value))
    tok, *remainder = value[2:].split('?=', 1)
    if tok == value[2:]:
        raise errors.HeaderParseError(
            "expected encoded word but found {}".format(value))
    remstr = ''.join(remainder)
    if remstr[:2].isdigit():
        rest, *remainder = remstr.split('?=', 1)
        tok = tok + '?=' + rest
    if len(tok.split()) > 1:
        ew.defects.append(errors.InvalidHeaderDefect(
            "whitespace inside encoded word"))
    ew.cte = value
    value = ''.join(remainder)
    try:
        text, charset, lang, defects = _ew.decode('=?' + tok + '?=')
    except ValueError:
        raise errors.HeaderParseError(
            "encoded word format invalid: '{}'".format(ew.cte))
    ew.charset = charset
    ew.lang = lang
    ew.defects.extend(defects)
    while text:
        if text[0] in WSP:
            token, text = get_fws(text)
            ew.append(token)
            continue
        chars, *remainder = _wsp_splitter(text, 1)
        vtext = ValueTerminal(chars, 'vtext')
        _validate_xtext(vtext)
        ew.append(vtext)
        text = ''.join(remainder)
    return ew, value
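For comparison, the older public stdlib API email.header.decode_header applies the same RFC 2047 decoding rules that _ew.decode implements for this parser. A quick sketch of decoding a single Q-encoded word:

```python
from email.header import decode_header

# decode_header returns a list of (decoded, charset) pairs; for encoded
# words the decoded value is bytes and must be decoded with its charset.
parts = decode_header('=?utf-8?q?caf=C3=A9?=')
decoded = ''.join(
    chunk.decode(charset) if isinstance(chunk, bytes) else chunk
    for chunk, charset in parts)
```

Unencoded runs in a header come back as plain str with a charset of None, which is why the isinstance check is needed.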

def get_unstructured(value):
    """unstructured = (*([FWS] vchar) *WSP) / obs-unstruct
       obs-unstruct = *((*LF *CR *(obs-utext *LF *CR)) / FWS)
       obs-utext = %d0 / obs-NO-WS-CTL / LF / CR

    obs-NO-WS-CTL is control characters except WSP/CR/LF.

    So, basically, we have printable runs, plus control characters or nulls in
    the obsolete syntax, separated by whitespace.  Since RFC 2047 uses the
    obsolete syntax in its specification, but requires whitespace on either
    side of the encoded words, I can see no reason to need to separate the
    non-printable-non-whitespace from the printable runs if they occur, so we
    parse this into xtext tokens separated by WSP tokens.

    Because an 'unstructured' value must by definition constitute the entire
    value, this 'get' routine does not return a remaining value, only the
    parsed TokenList.

    """
    # XXX: but what about bare CR and LF?  They might signal the start or
    # end of an encoded word.  YAGNI for now, since our current parsers
    # will never send us strings with bare CR or LF.

    unstructured = UnstructuredTokenList()
    while value:
        if value[0] in WSP:
            token, value = get_fws(value)
            unstructured.append(token)
            continue
        if value.startswith('=?'):
            try:
                token, value = get_encoded_word(value)
            except errors.HeaderParseError:
                pass
            else:
                have_ws = True
                if len(unstructured) > 0:
                    if unstructured[-1].token_type != 'fws':
                        unstructured.defects.append(errors.InvalidHeaderDefect(
                            "missing whitespace before encoded word"))
                        have_ws = False
                if have_ws and len(unstructured) > 1:
                    if unstructured[-2].token_type == 'encoded-word':
                        unstructured[-1] = EWWhiteSpaceTerminal(
                            unstructured[-1], 'fws')
                unstructured.append(token)
                continue
        tok, *remainder = _wsp_splitter(value, 1)
        vtext = ValueTerminal(tok, 'vtext')
        _validate_xtext(vtext)
        unstructured.append(vtext)
        value = ''.join(remainder)
    return unstructured
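Assuming this module is importable under its stdlib name email._header_value_parser (a private API, so details can vary between Python versions), get_unstructured can be exercised directly; an encoded word in the input is decoded into the token tree, and str() of the result yields the decoded text:

```python
from email._header_value_parser import get_unstructured

# The encoded word is decoded; the preceding whitespace is kept because
# it separates a vtext token from the encoded word, not two encoded words.
token_list = get_unstructured('Hello =?utf-8?q?caf=C3=A9?=')
text = str(token_list)
```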

def get_qp_ctext(value):
    r"""ctext = <printable ascii except \ ( )>

    This is not the RFC ctext, since we are handling nested comments in
    comment and unquoting quoted-pairs here.  We allow anything except the
    '()' characters, but if we find any ASCII other than the RFC defined
    printable ASCII, a NonPrintableDefect is added to the token's defects
    list.  Since quoted pairs are converted to their unquoted values, what
    is returned is a 'ptext' token.  In this case it is a WhiteSpaceTerminal,
    so its value is ' '.

    """
    ptext, value, _ = _get_ptext_to_endchars(value, '()')
    ptext = WhiteSpaceTerminal(ptext, 'ptext')
    _validate_xtext(ptext)
    return ptext, value


def get_qcontent(value):
    """qcontent = qtext / quoted-pair

    We allow anything except the DQUOTE character, but if we find any ASCII
    other than the RFC defined printable ASCII, a NonPrintableDefect is
    added to the token's defects list.  Any quoted pairs are converted to
    their unquoted values, so what is returned is a 'ptext' token.  In this
    case it is a ValueTerminal.

    """
    ptext, value, _ = _get_ptext_to_endchars(value, '"')
    ptext = ValueTerminal(ptext, 'ptext')
    _validate_xtext(ptext)
    return ptext, value

def get_atext(value):
    """atext = <matches _atext_matcher>

    We allow any non-ATOM_ENDS in atext, but add an InvalidATextDefect to
    the token's defects list if we find non-atext characters.
    """
    m = _non_atom_end_matcher(value)
    if not m:
        raise errors.HeaderParseError(
            "expected atext but found '{}'".format(value))
    atext = m.group()
    value = value[len(atext):]
    atext = ValueTerminal(atext, 'atext')
    _validate_xtext(atext)
    return atext, value


def get_bare_quoted_string(value):
    """bare-quoted-string = DQUOTE *([FWS] qcontent) [FWS] DQUOTE

    A quoted-string without the leading or trailing white space.  Its
    value is the text between the quote marks, with whitespace
    preserved and quoted pairs decoded.
    """
    if value[0] != '"':
        raise errors.HeaderParseError(
            "expected '\"' but found '{}'".format(value))
    bare_quoted_string = BareQuotedString()
    value = value[1:]
    while value and value[0] != '"':
        if value[0] in WSP:
            token, value = get_fws(value)
        else:
            token, value = get_qcontent(value)
        bare_quoted_string.append(token)
    if not value:
        bare_quoted_string.defects.append(errors.InvalidHeaderDefect(
            "end of header inside quoted string"))
        return bare_quoted_string, value
    return bare_quoted_string, value[1:]

def get_comment(value):
    """comment = "(" *([FWS] ccontent) [FWS] ")"
       ccontent = ctext / quoted-pair / comment

    We handle nested comments here, and quoted-pair in our qp-ctext routine.
    """
    if value and value[0] != '(':
        raise errors.HeaderParseError(
            "expected '(' but found '{}'".format(value))
    comment = Comment()
    value = value[1:]
    while value and value[0] != ")":
        if value[0] in WSP:
            token, value = get_fws(value)
        elif value[0] == '(':
            token, value = get_comment(value)
        else:
            token, value = get_qp_ctext(value)
        comment.append(token)
    if not value:
        comment.defects.append(errors.InvalidHeaderDefect(
            "end of header inside comment"))
        return comment, value
    return comment, value[1:]
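get_comment handles nesting by recursing whenever it sees another '('. The bookkeeping can be illustrated with an iterative depth counter; this simplified sketch (the name strip_comment is illustrative) ignores quoted-pairs, which the real routine delegates to get_qp_ctext:

```python
def strip_comment(value):
    # value must start at '('; returns (comment_body, rest).
    # A depth counter stands in for get_comment's recursion.
    if not value or value[0] != '(':
        raise ValueError("expected '(' but found {!r}".format(value))
    depth = 0
    for i, ch in enumerate(value):
        if ch == '(':
            depth += 1
        elif ch == ')':
            depth -= 1
            if depth == 0:
                return value[1:i], value[i+1:]
    # Unterminated comment: mirror the defect case by consuming everything.
    return value[1:], ''
```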

def get_cfws(value):
    """CFWS = (1*([FWS] comment) [FWS]) / FWS

    """
    cfws = CFWSList()
    while value and value[0] in CFWS_LEADER:
        if value[0] in WSP:
            token, value = get_fws(value)
        else:
            token, value = get_comment(value)
        cfws.append(token)
    return cfws, value


def get_quoted_string(value):
    """quoted-string = [CFWS] <bare-quoted-string> [CFWS]

    'bare-quoted-string' is an intermediate class defined by this
    parser and not by the RFC grammar.  It is the quoted string
    without any attached CFWS.
    """
    quoted_string = QuotedString()
    if value and value[0] in CFWS_LEADER:
        token, value = get_cfws(value)
        quoted_string.append(token)
    token, value = get_bare_quoted_string(value)
    quoted_string.append(token)
    if value and value[0] in CFWS_LEADER:
        token, value = get_cfws(value)
        quoted_string.append(token)
    return quoted_string, value


def get_atom(value):
    """atom = [CFWS] 1*atext [CFWS]

    """
    atom = Atom()
    if value and value[0] in CFWS_LEADER:
        token, value = get_cfws(value)
        atom.append(token)
    if value and value[0] in ATOM_ENDS:
        raise errors.HeaderParseError(
            "expected atom but found '{}'".format(value))
    token, value = get_atext(value)
    atom.append(token)
    if value and value[0] in CFWS_LEADER:
        token, value = get_cfws(value)
        atom.append(token)
    return atom, value

def get_dot_atom_text(value):
    """ dot-atom-text = 1*atext *("." 1*atext)

    """
    dot_atom_text = DotAtomText()
    if not value or value[0] in ATOM_ENDS:
        raise errors.HeaderParseError("expected atom at the start of "
            "dot-atom-text but found '{}'".format(value))
    while value and value[0] not in ATOM_ENDS:
        token, value = get_atext(value)
        dot_atom_text.append(token)
        if value and value[0] == '.':
            dot_atom_text.append(DOT)
            value = value[1:]
    if dot_atom_text[-1] is DOT:
        raise errors.HeaderParseError("expected atom at end of dot-atom-text "
            "but found '{}'".format('.'+value))
    return dot_atom_text, value
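The dot-atom-text rule can also be expressed as a single regular expression over RFC 5322's atext alphabet. A sketch for validation only (the token-based parser above is still needed to record defects and build the parse tree):

```python
import re

# atext per RFC 5322 section 3.2.3; dot-atom-text = 1*atext *("." 1*atext)
ATEXT = r"[A-Za-z0-9!#$%&'*+\-/=?^_`{|}~]"
DOT_ATOM_TEXT = re.compile(r"{a}+(?:\.{a}+)*$".format(a=ATEXT))
```

The anchored pattern rejects leading, trailing, and doubled dots, matching the two HeaderParseError cases above.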

def get_dot_atom(value):
    """ dot-atom = [CFWS] dot-atom-text [CFWS]

    """
    dot_atom = DotAtom()
    if value[0] in CFWS_LEADER:
        token, value = get_cfws(value)
        dot_atom.append(token)
    token, value = get_dot_atom_text(value)
    dot_atom.append(token)
    if value and value[0] in CFWS_LEADER:
        token, value = get_cfws(value)
        dot_atom.append(token)
    return dot_atom, value


def get_word(value):
    """word = atom / quoted-string

    Either atom or quoted-string may start with CFWS.  We have to peel off
    this CFWS first to determine which type of word to parse.  Afterward we
    splice the leading CFWS, if any, into the parsed sub-token.

    If neither an atom nor a quoted-string is found before the next special,
    a HeaderParseError is raised.

    The token returned is either an Atom or a QuotedString, as appropriate.
    This means the 'word' level of the formal grammar is not represented in
    the parse tree; this is because having that extra layer when manipulating
    the parse tree is more confusing than it is helpful.

    """
    if value[0] in CFWS_LEADER:
        leader, value = get_cfws(value)
    else:
        leader = None
    if value[0] == '"':
        token, value = get_quoted_string(value)
    elif value[0] in SPECIALS:
        raise errors.HeaderParseError("Expected 'atom' or 'quoted-string' "
            "but found '{}'".format(value))
    else:
        token, value = get_atom(value)
    if leader is not None:
        token[:0] = [leader]
    return token, value

def get_phrase(value):
    """ phrase = 1*word / obs-phrase
        obs-phrase = word *(word / "." / CFWS)

    This means a phrase can be a sequence of words, periods, and CFWS in any
    order as long as it starts with at least one word.  If anything other
    than words is detected, an ObsoleteHeaderDefect is added to the token's
    defect list.  We also accept a phrase that starts with CFWS followed by
    a dot; this is registered as an InvalidHeaderDefect, since it is not
    supported by even the obsolete grammar.

    """
    phrase = Phrase()
    try:
        token, value = get_word(value)
        phrase.append(token)
    except errors.HeaderParseError:
        phrase.defects.append(errors.InvalidHeaderDefect(
            "phrase does not start with word"))
    while value and value[0] not in PHRASE_ENDS:
        if value[0] == '.':
            phrase.append(DOT)
            phrase.defects.append(errors.ObsoleteHeaderDefect(
                "period in 'phrase'"))
            value = value[1:]
        else:
            try:
                token, value = get_word(value)
            except errors.HeaderParseError:
                if value[0] in CFWS_LEADER:
                    token, value = get_cfws(value)
                    phrase.defects.append(errors.ObsoleteHeaderDefect(
                        "comment found without atom"))
                else:
                    raise
            phrase.append(token)
    return phrase, value

def get_local_part(value):
    """ local-part = dot-atom / quoted-string / obs-local-part

    """
    local_part = LocalPart()
    leader = None
    if value[0] in CFWS_LEADER:
        leader, value = get_cfws(value)
    if not value:
        raise errors.HeaderParseError(
            "expected local-part but found '{}'".format(value))
    try:
        token, value = get_dot_atom(value)
    except errors.HeaderParseError:
        try:
            token, value = get_word(value)
        except errors.HeaderParseError:
            if value[0] != '\\' and value[0] in PHRASE_ENDS:
                raise
            token = TokenList()
    if leader is not None:
        token[:0] = [leader]
    local_part.append(token)
    if value and (value[0] == '\\' or value[0] not in PHRASE_ENDS):
        obs_local_part, value = get_obs_local_part(str(local_part) + value)
        if obs_local_part.token_type == 'invalid-obs-local-part':
            local_part.defects.append(errors.InvalidHeaderDefect(
                "local-part is not dot-atom, quoted-string, or obs-local-part"))
        else:
            local_part.defects.append(errors.ObsoleteHeaderDefect(
                "local-part is not a dot-atom (contains CFWS)"))
        local_part[0] = obs_local_part
    try:
        local_part.value.encode('ascii')
    except UnicodeEncodeError:
        local_part.defects.append(errors.NonASCIILocalPartDefect(
            "local-part contains non-ASCII characters"))
    return local_part, value

def get_obs_local_part(value):
    """ obs-local-part = word *("." word)

    """
    obs_local_part = ObsLocalPart()
    last_non_ws_was_dot = False
    while value and (value[0] == '\\' or value[0] not in PHRASE_ENDS):
        if value[0] == '.':
            if last_non_ws_was_dot:
                obs_local_part.defects.append(errors.InvalidHeaderDefect(
                    "invalid repeated '.'"))
            obs_local_part.append(DOT)
            last_non_ws_was_dot = True
            value = value[1:]
            continue
        elif value[0] == '\\':
            obs_local_part.append(ValueTerminal(value[0],
                                                'misplaced-special'))
            value = value[1:]
            obs_local_part.defects.append(errors.InvalidHeaderDefect(
                "'\\' character outside of quoted-string/ccontent"))
            last_non_ws_was_dot = False
            continue
        if obs_local_part and obs_local_part[-1].token_type != 'dot':
            obs_local_part.defects.append(errors.InvalidHeaderDefect(
                "missing '.' between words"))
        try:
            token, value = get_word(value)
            last_non_ws_was_dot = False
        except errors.HeaderParseError:
            if value[0] not in CFWS_LEADER:
                raise
            token, value = get_cfws(value)
        obs_local_part.append(token)
    if (obs_local_part[0].token_type == 'dot' or
            obs_local_part[0].token_type == 'cfws' and
            obs_local_part[1].token_type == 'dot'):
        obs_local_part.defects.append(errors.InvalidHeaderDefect(
            "Invalid leading '.' in local part"))
    if (obs_local_part[-1].token_type == 'dot' or
            obs_local_part[-1].token_type == 'cfws' and
            obs_local_part[-2].token_type == 'dot'):
        obs_local_part.defects.append(errors.InvalidHeaderDefect(
            "Invalid trailing '.' in local part"))
    if obs_local_part.defects:
        obs_local_part.token_type = 'invalid-obs-local-part'
    return obs_local_part, value

def get_dtext(value):
    r""" dtext = <printable ascii except \ [ ]> / obs-dtext

    obs-dtext = obs-NO-WS-CTL / quoted-pair

    We allow anything except the excluded characters, but if we find any
    ASCII other than the RFC defined printable ASCII, a NonPrintableDefect
    is added to the token's defects list.  Quoted pairs are converted to
    their unquoted values, so what is returned is a ptext token, in this
    case a ValueTerminal.  If there were quoted-printables, an
    ObsoleteHeaderDefect is added to the returned token's defect list.

    """
    ptext, value, had_qp = _get_ptext_to_endchars(value, '[]')
    ptext = ValueTerminal(ptext, 'ptext')
    if had_qp:
        ptext.defects.append(errors.ObsoleteHeaderDefect(
            "quoted printable found in domain-literal"))
    _validate_xtext(ptext)
    return ptext, value


def _check_for_early_dl_end(value, domain_literal):
    if value:
        return False
    domain_literal.defects.append(errors.InvalidHeaderDefect(
        "end of input inside domain-literal"))
    domain_literal.append(ValueTerminal(']', 'domain-literal-end'))
    return True

def get_domain_literal(value):
    """ domain-literal = [CFWS] "[" *([FWS] dtext) [FWS] "]" [CFWS]

    """
    domain_literal = DomainLiteral()
    if value[0] in CFWS_LEADER:
        token, value = get_cfws(value)
        domain_literal.append(token)
    if not value:
        raise errors.HeaderParseError("expected domain-literal")
    if value[0] != '[':
        raise errors.HeaderParseError("expected '[' at start of domain-literal "
            "but found '{}'".format(value))
    value = value[1:]
    if _check_for_early_dl_end(value, domain_literal):
        return domain_literal, value
    domain_literal.append(ValueTerminal('[', 'domain-literal-start'))
    if value[0] in WSP:
        token, value = get_fws(value)
        domain_literal.append(token)
    token, value = get_dtext(value)
    domain_literal.append(token)
    if _check_for_early_dl_end(value, domain_literal):
        return domain_literal, value
    if value[0] in WSP:
        token, value = get_fws(value)
        domain_literal.append(token)
    if _check_for_early_dl_end(value, domain_literal):
        return domain_literal, value
    if value[0] != ']':
        raise errors.HeaderParseError("expected ']' at end of domain-literal "
            "but found '{}'".format(value))
    domain_literal.append(ValueTerminal(']', 'domain-literal-end'))
    value = value[1:]
    if value and value[0] in CFWS_LEADER:
        token, value = get_cfws(value)
        domain_literal.append(token)
    return domain_literal, value

def get_domain(value):
    """ domain = dot-atom / domain-literal / obs-domain
        obs-domain = atom *("." atom)

    """
    domain = Domain()
    leader = None
    if value[0] in CFWS_LEADER:
        leader, value = get_cfws(value)
    if not value:
        raise errors.HeaderParseError(
            "expected domain but found '{}'".format(value))
    if value[0] == '[':
        token, value = get_domain_literal(value)
        if leader is not None:
            token[:0] = [leader]
        domain.append(token)
        return domain, value
    try:
        token, value = get_dot_atom(value)
    except errors.HeaderParseError:
        token, value = get_atom(value)
    if leader is not None:
        token[:0] = [leader]
    domain.append(token)
    if value and value[0] == '.':
        domain.defects.append(errors.ObsoleteHeaderDefect(
            "domain is not a dot-atom (contains CFWS)"))
        if domain[0].token_type == 'dot-atom':
            domain[:] = domain[0]
        while value and value[0] == '.':
            domain.append(DOT)
            token, value = get_atom(value[1:])
            domain.append(token)
    return domain, value

def get_addr_spec(value):
    """ addr-spec = local-part "@" domain

    """
    addr_spec = AddrSpec()
    token, value = get_local_part(value)
    addr_spec.append(token)
    if not value or value[0] != '@':
        addr_spec.defects.append(errors.InvalidHeaderDefect(
            "addr-spec local part with no domain"))
        return addr_spec, value
    addr_spec.append(ValueTerminal('@', 'address-at-symbol'))
    token, value = get_domain(value[1:])
    addr_spec.append(token)
    return addr_spec, value
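For a quick comparison, the legacy email.utils.parseaddr performs a similar addr-spec split (it predates this parser and does not use it), separating the display-name phrase from the local-part "@" domain:

```python
from email.utils import parseaddr

# parseaddr returns a (display_name, addr_spec) pair.
name, addr = parseaddr('Jane Doe <jane.doe@example.com>')
```

Unlike get_addr_spec, parseaddr reports no defects; malformed input simply yields ('', '').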

def get_obs_route(value):
    """ obs-route = obs-domain-list ":"
        obs-domain-list = *(CFWS / ",") "@" domain *("," [CFWS] ["@" domain])

    Returns an obs-route token with the appropriate sub-tokens (that is,
    there is no obs-domain-list in the parse tree).

    """
    obs_route = ObsRoute()
    while value and (value[0] == ',' or value[0] in CFWS_LEADER):
        if value[0] in CFWS_LEADER:
            token, value = get_cfws(value)
            obs_route.append(token)
        elif value[0] == ',':
            obs_route.append(ListSeparator)
            value = value[1:]
    if not value or value[0] != '@':
        raise errors.HeaderParseError(
            "expected obs-route domain but found '{}'".format(value))
    obs_route.append(RouteComponentMarker)
    token, value = get_domain(value[1:])
    obs_route.append(token)
    while value and value[0] == ',':
        obs_route.append(ListSeparator)
        value = value[1:]
        if not value:
            break
        if value[0] in CFWS_LEADER:
            token, value = get_cfws(value)
            obs_route.append(token)
        if value[0] == '@':
            obs_route.append(RouteComponentMarker)
            token, value = get_domain(value[1:])
            obs_route.append(token)
    if not value:
        raise errors.HeaderParseError("end of header while parsing obs-route")
    if value[0] != ':':
        raise errors.HeaderParseError("expected ':' marking end of "
            "obs-route but found '{}'".format(value))
    obs_route.append(ValueTerminal(':', 'end-of-obs-route-marker'))
    return obs_route, value[1:]

def get_angle_addr(value):
    """ angle-addr = [CFWS] "<" addr-spec ">" [CFWS] / obs-angle-addr
        obs-angle-addr = [CFWS] "<" obs-route addr-spec ">" [CFWS]

    """
    angle_addr = AngleAddr()
    if value[0] in CFWS_LEADER:
        token, value = get_cfws(value)
        angle_addr.append(token)
    if not value or value[0] != '<':
        raise errors.HeaderParseError(
            "expected angle-addr but found '{}'".format(value))
    angle_addr.append(ValueTerminal('<', 'angle-addr-start'))
    value = value[1:]
    # Although it is not legal per RFC5322, SMTP uses '<>' in certain
    # circumstances.
    if value[0] == '>':
        angle_addr.append(ValueTerminal('>', 'angle-addr-end'))
        angle_addr.defects.append(errors.InvalidHeaderDefect(
            "null addr-spec in angle-addr"))
        value = value[1:]
        return angle_addr, value
    try:
        token, value = get_addr_spec(value)
    except errors.HeaderParseError:
        try:
            token, value = get_obs_route(value)
            angle_addr.defects.append(errors.ObsoleteHeaderDefect(
                "obsolete route specification in angle-addr"))
        except errors.HeaderParseError:
            raise errors.HeaderParseError(
                "expected addr-spec or obs-route but found '{}'".format(value))
        angle_addr.append(token)
        token, value = get_addr_spec(value)
    angle_addr.append(token)
    if value and value[0] == '>':
        value = value[1:]
    else:
        angle_addr.defects.append(errors.InvalidHeaderDefect(
            "missing trailing '>' on angle-addr"))
    angle_addr.append(ValueTerminal('>', 'angle-addr-end'))
    if value and value[0] in CFWS_LEADER:
        token, value = get_cfws(value)
        angle_addr.append(token)
    return angle_addr, value

def get_display_name(value):
    """ display-name = phrase

    Because this is simply a name-rule, we don't return a display-name
    token containing a phrase, but rather a display-name token with
    the content of the phrase.

    """
    display_name = DisplayName()
    token, value = get_phrase(value)
    display_name.extend(token[:])
    display_name.defects = token.defects[:]
    return display_name, value

def get_name_addr(value):
    """ name-addr = [display-name] angle-addr
    """
    name_addr = NameAddr()
    # Both the optional display name and the angle-addr can start with cfws.
    leader = None
    if value[0] in CFWS_LEADER:
        leader, value = get_cfws(value)
        if not value:
            raise errors.HeaderParseError(
                "expected name-addr but found '{}'".format(leader))
    if value[0] != '<':
        if value[0] in PHRASE_ENDS:
            raise errors.HeaderParseError(
                "expected name-addr but found '{}'".format(value))
        token, value = get_display_name(value)
        if not value:
            raise errors.HeaderParseError(
                "expected name-addr but found '{}'".format(token))
        if leader is not None:
            token[0][:0] = [leader]
            leader = None
        name_addr.append(token)
    token, value = get_angle_addr(value)
    if leader is not None:
        token[:0] = [leader]
    name_addr.append(token)
    return name_addr, value

def get_mailbox(value):
    """ mailbox = name-addr / addr-spec
    """
    # The only way to figure out if we are dealing with a name-addr or an
    # addr-spec is to try parsing each one.
    mailbox = Mailbox()
    try:
        token, value = get_name_addr(value)
    except errors.HeaderParseError:
        try:
            token, value = get_addr_spec(value)
        except errors.HeaderParseError:
            raise errors.HeaderParseError(
                "expected mailbox but found '{}'".format(value))
    if any(isinstance(x, errors.InvalidHeaderDefect)
           for x in token.all_defects):
        mailbox.token_type = 'invalid-mailbox'
    mailbox.append(token)
    return mailbox, value

def get_invalid_mailbox(value, endchars):
    """ Read everything up to one of the chars in endchars.

    This is outside the formal grammar. The InvalidMailbox TokenList that is
    returned acts like a Mailbox, but the data attributes are None.

    """
    invalid_mailbox = InvalidMailbox()
    while value and value[0] not in endchars:
        if value[0] in PHRASE_ENDS:
            invalid_mailbox.append(ValueTerminal(value[0],
                                                 'misplaced-special'))
            value = value[1:]
        else:
            token, value = get_phrase(value)
            invalid_mailbox.append(token)
    return invalid_mailbox, value

def get_mailbox_list(value):
    """ mailbox-list = (mailbox *("," mailbox)) / obs-mbox-list
        obs-mbox-list = *([CFWS] ",") mailbox *("," [mailbox / CFWS])

    For this routine we go outside the formal grammar in order to improve error
    handling. We recognize the end of the mailbox list only at the end of the
    value or at a ';' (the group terminator). This is so that we can turn
    invalid mailboxes into InvalidMailbox tokens and continue parsing any
    remaining valid mailboxes. We also allow all mailbox entries to be null,
    and this condition is handled appropriately at a higher level.

    """
    mailbox_list = MailboxList()
    while value and value[0] != ';':
        try:
            token, value = get_mailbox(value)
            mailbox_list.append(token)
        except errors.HeaderParseError:
            leader = None
            if value[0] in CFWS_LEADER:
                leader, value = get_cfws(value)
                if not value or value[0] in ',;':
                    mailbox_list.append(leader)
                    mailbox_list.defects.append(errors.ObsoleteHeaderDefect(
                        "empty element in mailbox-list"))
                else:
                    token, value = get_invalid_mailbox(value, ',;')
                    if leader is not None:
                        token[:0] = [leader]
                    mailbox_list.append(token)
                    mailbox_list.defects.append(errors.InvalidHeaderDefect(
                        "invalid mailbox in mailbox-list"))
            elif value[0] == ',':
                mailbox_list.defects.append(errors.ObsoleteHeaderDefect(
                    "empty element in mailbox-list"))
            else:
                token, value = get_invalid_mailbox(value, ',;')
                if leader is not None:
                    token[:0] = [leader]
                mailbox_list.append(token)
                mailbox_list.defects.append(errors.InvalidHeaderDefect(
                    "invalid mailbox in mailbox-list"))
        if value and value[0] not in ',;':
            # Crap after mailbox; treat it as an invalid mailbox.
            # The mailbox info will still be available.
            mailbox = mailbox_list[-1]
            mailbox.token_type = 'invalid-mailbox'
            token, value = get_invalid_mailbox(value, ',;')
            mailbox.extend(token)
            mailbox_list.defects.append(errors.InvalidHeaderDefect(
                "invalid mailbox in mailbox-list"))
        if value and value[0] == ',':
            mailbox_list.append(ListSeparator)
            value = value[1:]
    return mailbox_list, value

def get_group_list(value):
    """ group-list = mailbox-list / CFWS / obs-group-list
        obs-group-list = 1*([CFWS] ",") [CFWS]

    """
    group_list = GroupList()
    if not value:
        group_list.defects.append(errors.InvalidHeaderDefect(
            "end of header before group-list"))
        return group_list, value
    leader = None
    if value and value[0] in CFWS_LEADER:
        leader, value = get_cfws(value)
        if not value:
            # This should never happen in email parsing, since CFWS-only is a
            # legal alternative to group-list in a group, which is the only
            # place group-list appears.
            group_list.defects.append(errors.InvalidHeaderDefect(
                "end of header in group-list"))
            group_list.append(leader)
            return group_list, value
        if value[0] == ';':
            group_list.append(leader)
            return group_list, value
    token, value = get_mailbox_list(value)
    if len(token.all_mailboxes) == 0:
        if leader is not None:
            group_list.append(leader)
        group_list.extend(token)
        group_list.defects.append(errors.ObsoleteHeaderDefect(
            "group-list with empty entries"))
        return group_list, value
    if leader is not None:
        token[:0] = [leader]
    group_list.append(token)
    return group_list, value

def get_group(value):
    """ group = display-name ":" [group-list] ";" [CFWS]

    """
    group = Group()
    token, value = get_display_name(value)
    if not value or value[0] != ':':
        raise errors.HeaderParseError("expected ':' at end of group "
                                      "display name but found '{}'".format(value))
    group.append(token)
    group.append(ValueTerminal(':', 'group-display-name-terminator'))
    value = value[1:]
    if value and value[0] == ';':
        group.append(ValueTerminal(';', 'group-terminator'))
        return group, value[1:]
    token, value = get_group_list(value)
    group.append(token)
    if not value:
        group.defects.append(errors.InvalidHeaderDefect(
            "end of header in group"))
    elif value[0] != ';':
        raise errors.HeaderParseError(
            "expected ';' at end of group but found {}".format(value))
    group.append(ValueTerminal(';', 'group-terminator'))
    value = value[1:]
    if value and value[0] in CFWS_LEADER:
        token, value = get_cfws(value)
        group.append(token)
    return group, value

def get_address(value):
    """ address = mailbox / group

    Note that counter-intuitively, an address can be either a single address or
    a list of addresses (a group). This is why the returned Address object has
    a 'mailboxes' attribute which treats a single address as a list of length
    one. When you need to differentiate between the two cases, extract the
    single element, which is either a mailbox or a group token.

    """
    # The formal grammar isn't very helpful when parsing an address. mailbox
    # and group, especially when allowing for obsolete forms, start off very
    # similarly. It is only when you reach one of @, <, or : that you know
    # what you've got. So, we try each one in turn, starting with the more
    # likely of the two. We could perhaps make this more efficient by looking
    # for a phrase and then branching based on the next character, but that
    # would be a premature optimization.
    address = Address()
    try:
        token, value = get_group(value)
    except errors.HeaderParseError:
        try:
            token, value = get_mailbox(value)
        except errors.HeaderParseError:
            raise errors.HeaderParseError(
                "expected address but found '{}'".format(value))
    address.append(token)
    return address, value

def get_address_list(value):
    """ address_list = (address *("," address)) / obs-addr-list
        obs-addr-list = *([CFWS] ",") address *("," [address / CFWS])

    We depart from the formal grammar here by continuing to parse until the end
    of the input, assuming the input to be entirely composed of an
    address-list. This is always true in email parsing, and allows us
    to skip invalid addresses to parse additional valid ones.

    """
    address_list = AddressList()
    while value:
        try:
            token, value = get_address(value)
            address_list.append(token)
        except errors.HeaderParseError as err:
            leader = None
            if value[0] in CFWS_LEADER:
                leader, value = get_cfws(value)
                if not value or value[0] == ',':
                    address_list.append(leader)
                    address_list.defects.append(errors.ObsoleteHeaderDefect(
                        "address-list entry with no content"))
                else:
                    token, value = get_invalid_mailbox(value, ',')
                    if leader is not None:
                        token[:0] = [leader]
                    address_list.append(Address([token]))
                    address_list.defects.append(errors.InvalidHeaderDefect(
                        "invalid address in address-list"))
            elif value[0] == ',':
                address_list.defects.append(errors.ObsoleteHeaderDefect(
                    "empty element in address-list"))
            else:
                token, value = get_invalid_mailbox(value, ',')
                if leader is not None:
                    token[:0] = [leader]
                address_list.append(Address([token]))
                address_list.defects.append(errors.InvalidHeaderDefect(
                    "invalid address in address-list"))
        if value and value[0] != ',':
            # Crap after address; treat it as an invalid mailbox.
            # The mailbox info will still be available.
            mailbox = address_list[-1][0]
            mailbox.token_type = 'invalid-mailbox'
            token, value = get_invalid_mailbox(value, ',')
            mailbox.extend(token)
            address_list.defects.append(errors.InvalidHeaderDefect(
                "invalid address in address-list"))
        if value:  # Must be a , at this point.
            address_list.append(ValueTerminal(',', 'list-separator'))
            value = value[1:]
    return address_list, value
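The mailbox/group duality described in get_address's docstring can be seen directly through the AddressList token's `mailboxes` attribute, which flattens both forms into a list of mailboxes. A minimal sketch, again using this private module's API (an assumption; it may change between Python versions):

```python
# Illustrative only: parse a two-element address list with the private API.
from email._header_value_parser import get_address_list

address_list, rest = get_address_list('Fred <fred@example.com>, ann@example.com')
mboxes = address_list.mailboxes   # groups and single addresses flattened
for mailbox in mboxes:
    # display_name is None for a bare addr-spec mailbox.
    print(mailbox.display_name, mailbox.addr_spec)
print(repr(rest))                 # the whole value was consumed
```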

#
# XXX: As I begin to add additional header parsers, I'm realizing we probably
# have two levels of parser routines: the get_XXX methods that get a token in
# the grammar, and parse_XXX methods that parse an entire field value. So
# get_address_list above should really be a parse_ method, as probably should
# be get_unstructured.
#

def parse_mime_version(value):
    """ mime-version = [CFWS] 1*digit [CFWS] "." [CFWS] 1*digit [CFWS]

    """
    # The [CFWS] is implicit in the RFC 2045 BNF.
    # XXX: This routine is a bit verbose, should factor out a get_int method.
    mime_version = MIMEVersion()
    if not value:
        mime_version.defects.append(errors.HeaderMissingRequiredValue(
            "Missing MIME version number (eg: 1.0)"))
        return mime_version
    if value[0] in CFWS_LEADER:
        token, value = get_cfws(value)
        mime_version.append(token)
        if not value:
            mime_version.defects.append(errors.HeaderMissingRequiredValue(
                "Expected MIME version number but found only CFWS"))
    digits = ''
    while value and value[0] != '.' and value[0] not in CFWS_LEADER:
        digits += value[0]
        value = value[1:]
    if not digits.isdigit():
        mime_version.defects.append(errors.InvalidHeaderDefect(
            "Expected MIME major version number but found {!r}".format(digits)))
        mime_version.append(ValueTerminal(digits, 'xtext'))
    else:
        mime_version.major = int(digits)
        mime_version.append(ValueTerminal(digits, 'digits'))
    if value and value[0] in CFWS_LEADER:
        token, value = get_cfws(value)
        mime_version.append(token)
    if not value or value[0] != '.':
        if mime_version.major is not None:
            mime_version.defects.append(errors.InvalidHeaderDefect(
                "Incomplete MIME version; found only major number"))
        if value:
            mime_version.append(ValueTerminal(value, 'xtext'))
        return mime_version
    mime_version.append(ValueTerminal('.', 'version-separator'))
    value = value[1:]
    if value and value[0] in CFWS_LEADER:
        token, value = get_cfws(value)
        mime_version.append(token)
    if not value:
        if mime_version.major is not None:
            mime_version.defects.append(errors.InvalidHeaderDefect(
                "Incomplete MIME version; found only major number"))
        return mime_version
    digits = ''
    while value and value[0] not in CFWS_LEADER:
        digits += value[0]
        value = value[1:]
    if not digits.isdigit():
        mime_version.defects.append(errors.InvalidHeaderDefect(
            "Expected MIME minor version number but found {!r}".format(digits)))
        mime_version.append(ValueTerminal(digits, 'xtext'))
    else:
        mime_version.minor = int(digits)
        mime_version.append(ValueTerminal(digits, 'digits'))
    if value and value[0] in CFWS_LEADER:
        token, value = get_cfws(value)
        mime_version.append(token)
    if value:
        mime_version.defects.append(errors.InvalidHeaderDefect(
            "Excess non-CFWS text after MIME version"))
        mime_version.append(ValueTerminal(value, 'xtext'))
    return mime_version
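A quick sanity check of the mime-version rule, including the implicit CFWS (here a trailing comment). As above, this leans on the private module API, which is an assumption and subject to change:

```python
# Illustrative only: parse a MIME-Version value with a trailing comment.
from email._header_value_parser import parse_mime_version

mime_version = parse_mime_version('1.0 (produced by MetaSend Vx.x)')
print(mime_version.major)         # major version as an int
print(mime_version.minor)         # minor version as an int
print(len(mime_version.defects))  # a well-formed value registers no defects
```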

def get_invalid_parameter(value):
    """ Read everything up to the next ';'.

    This is outside the formal grammar. The InvalidParameter TokenList that is
    returned acts like a Parameter, but the data attributes are None.

    """
    invalid_parameter = InvalidParameter()
    while value and value[0] != ';':
        if value[0] in PHRASE_ENDS:
            invalid_parameter.append(ValueTerminal(value[0],
                                                   'misplaced-special'))
            value = value[1:]
        else:
            token, value = get_phrase(value)
            invalid_parameter.append(token)
    return invalid_parameter, value

def get_ttext(value):
    """ttext = <matches _ttext_matcher>

    We allow any non-TOKEN_ENDS in ttext, but add defects to the token's
    defects list if we find non-ttext characters. We also register defects for
    *any* non-printables even though the RFC doesn't exclude all of them,
    because we follow the spirit of RFC 5322.

    """
    m = _non_token_end_matcher(value)
    if not m:
        raise errors.HeaderParseError(
            "expected ttext but found '{}'".format(value))
    ttext = m.group()
    value = value[len(ttext):]
    ttext = ValueTerminal(ttext, 'ttext')
    _validate_xtext(ttext)
    return ttext, value

def get_token(value):
    """token = [CFWS] 1*ttext [CFWS]

    The RFC equivalent of ttext is any US-ASCII chars except space, ctls, or
    tspecials. We also exclude tabs even though the RFC doesn't.

    The RFC implies the CFWS but is not explicit about it in the BNF.

    """
    mtoken = Token()
    if value and value[0] in CFWS_LEADER:
        token, value = get_cfws(value)
        mtoken.append(token)
    if value and value[0] in TOKEN_ENDS:
        raise errors.HeaderParseError(
            "expected token but found '{}'".format(value))
    token, value = get_ttext(value)
    mtoken.append(token)
    if value and value[0] in CFWS_LEADER:
        token, value = get_cfws(value)
        mtoken.append(token)
    return mtoken, value

def get_attrtext(value):
    """attrtext = 1*(any non-ATTRIBUTE_ENDS character)

    We allow any non-ATTRIBUTE_ENDS in attrtext, but add defects to the
    token's defects list if we find non-attrtext characters. We also register
    defects for *any* non-printables even though the RFC doesn't exclude all of
    them, because we follow the spirit of RFC 5322.

    """
    m = _non_attribute_end_matcher(value)
    if not m:
        raise errors.HeaderParseError(
            "expected attrtext but found {!r}".format(value))
    attrtext = m.group()
    value = value[len(attrtext):]
    attrtext = ValueTerminal(attrtext, 'attrtext')
    _validate_xtext(attrtext)
    return attrtext, value

def get_attribute(value):
    """ [CFWS] 1*attrtext [CFWS]

    This version of the BNF makes the CFWS explicit, and as usual we use a
    value terminal for the actual run of characters. The RFC equivalent of
    attrtext is the token characters, with the subtraction of '*', "'", and '%'.
    We include tab in the excluded set just as we do for token.

    """
    attribute = Attribute()
    if value and value[0] in CFWS_LEADER:
        token, value = get_cfws(value)
        attribute.append(token)
    if value and value[0] in ATTRIBUTE_ENDS:
        raise errors.HeaderParseError(
            "expected token but found '{}'".format(value))
    token, value = get_attrtext(value)
    attribute.append(token)
    if value and value[0] in CFWS_LEADER:
        token, value = get_cfws(value)
        attribute.append(token)
    return attribute, value

def get_extended_attrtext(value):
    """attrtext = 1*(any non-ATTRIBUTE_ENDS character plus '%')

    This is a special parsing routine so that we get a value that
    includes % escapes as a single string (which we decode as a single
    string later).

    """
    m = _non_extended_attribute_end_matcher(value)
    if not m:
        raise errors.HeaderParseError(
            "expected extended attrtext but found {!r}".format(value))
    attrtext = m.group()
    value = value[len(attrtext):]
    attrtext = ValueTerminal(attrtext, 'extended-attrtext')
    _validate_xtext(attrtext)
    return attrtext, value

def get_extended_attribute(value):
    """ [CFWS] 1*extended_attrtext [CFWS]

    This is like the non-extended version except we allow % characters, so that
    we can pick up an encoded value as a single string.

    """
    # XXX: should we have an ExtendedAttribute TokenList?
    attribute = Attribute()
    if value and value[0] in CFWS_LEADER:
        token, value = get_cfws(value)
        attribute.append(token)
    if value and value[0] in EXTENDED_ATTRIBUTE_ENDS:
        raise errors.HeaderParseError(
            "expected token but found '{}'".format(value))
    token, value = get_extended_attrtext(value)
    attribute.append(token)
    if value and value[0] in CFWS_LEADER:
        token, value = get_cfws(value)
        attribute.append(token)
    return attribute, value

def get_section(value):
    """ '*' digits

    The formal BNF is more complicated because leading 0s are not allowed. We
    check for that and add a defect. We also assume no CFWS is allowed between
    the '*' and the digits, though the RFC is not crystal clear on that.
    The caller should already have dealt with leading CFWS.

    """
    section = Section()
    if not value or value[0] != '*':
        raise errors.HeaderParseError("Expected section but found {}".format(
            value))
    section.append(ValueTerminal('*', 'section-marker'))
    value = value[1:]
    if not value or not value[0].isdigit():
        raise errors.HeaderParseError("Expected section number but "
                                      "found {}".format(value))
    digits = ''
    while value and value[0].isdigit():
        digits += value[0]
        value = value[1:]
    if digits[0] == '0' and digits != '0':
        section.defects.append(errors.InvalidHeaderDefect(
            "section number has an invalid leading 0"))
    section.number = int(digits)
    section.append(ValueTerminal(digits, 'digits'))
    return section, value

def get_value(value):
    """ quoted-string / attribute
    """
    v = Value()
    if not value:
        raise errors.HeaderParseError("Expected value but found end of string")
    leader = None
    if value[0] in CFWS_LEADER:
        leader, value = get_cfws(value)
    if not value:
        raise errors.HeaderParseError("Expected value but found "
                                      "only {}".format(leader))
    if value[0] == '"':
        token, value = get_quoted_string(value)
    else:
        token, value = get_extended_attribute(value)
    if leader is not None:
        token[:0] = [leader]
    v.append(token)
    return v, value

def get_parameter(value):
    """ attribute [section] ["*"] [CFWS] "=" value

    The CFWS is implied by the RFC but not made explicit in the BNF. This
    simplified form of the BNF from the RFC is made to conform with the RFC BNF
    through some extra checks. We do it this way because it makes both error
    recovery and working with the resulting parse tree easier.
    """
    # It is possible CFWS would also be implicitly allowed between the section
    # and the 'extended-attribute' marker (the '*'), but we've never seen that
    # in the wild and we will therefore ignore the possibility.
    param = Parameter()
    token, value = get_attribute(value)
    param.append(token)
    if not value or value[0] == ';':
        param.defects.append(errors.InvalidHeaderDefect(
            "Parameter contains name ({}) but no value".format(token)))
        return param, value
    if value[0] == '*':
        try:
            token, value = get_section(value)
            param.sectioned = True
            param.append(token)
        except errors.HeaderParseError:
            pass
        if not value:
            raise errors.HeaderParseError("Incomplete parameter")
        if value[0] == '*':
            param.append(ValueTerminal('*', 'extended-parameter-marker'))
            value = value[1:]
            param.extended = True
    if value[0] != '=':
        raise errors.HeaderParseError("Parameter not followed by '='")
    param.append(ValueTerminal('=', 'parameter-separator'))
    value = value[1:]
    leader = None
    if value and value[0] in CFWS_LEADER:
        token, value = get_cfws(value)
        param.append(token)
    remainder = None
    appendto = param
    if param.extended and value and value[0] == '"':
        # Now for some serious hackery to handle the common invalid case of
        # double quotes around an extended value. We also accept (with defect)
        # a value marked as encoded that isn't really.
        qstring, remainder = get_quoted_string(value)
        inner_value = qstring.stripped_value
        semi_valid = False
        if param.section_number == 0:
            if inner_value and inner_value[0] == "'":
                semi_valid = True
            else:
                token, rest = get_attrtext(inner_value)
                if rest and rest[0] == "'":
                    semi_valid = True
        else:
            try:
                token, rest = get_extended_attrtext(inner_value)
            except errors.HeaderParseError:
                pass
            else:
                if not rest:
                    semi_valid = True
        if semi_valid:
            param.defects.append(errors.InvalidHeaderDefect(
                "Quoted string value for extended parameter is invalid"))
            param.append(qstring)
            for t in qstring:
                if t.token_type == 'bare-quoted-string':
                    t[:] = []
                    appendto = t
                    break
            value = inner_value
        else:
            remainder = None
            param.defects.append(errors.InvalidHeaderDefect(
                "Parameter marked as extended but appears to have a "
                "quoted string value that is non-encoded"))
    if value and value[0] == "'":
        token = None
    else:
        token, value = get_value(value)
    if not param.extended or param.section_number > 0:
        if not value or value[0] != "'":
            appendto.append(token)
            if remainder is not None:
                assert not value, value
                value = remainder
            return param, value
        param.defects.append(errors.InvalidHeaderDefect(
            "Apparent initial-extended-value but attribute "
            "was not marked as extended or was not initial section"))
    if not value:
        # Assume the charset/lang is missing and the token is the value.
        param.defects.append(errors.InvalidHeaderDefect(
            "Missing required charset/lang delimiters"))
        appendto.append(token)
        if remainder is None:
            return param, value
    else:
        if token is not None:
            for t in token:
                if t.token_type == 'extended-attrtext':
                    break
            t.token_type = 'attrtext'
            appendto.append(t)
            param.charset = t.value
        if value[0] != "'":
            raise errors.HeaderParseError("Expected RFC2231 char/lang encoding "
                                          "delimiter, but found {!r}".format(value))
        appendto.append(ValueTerminal("'", 'RFC2231 delimiter'))
        value = value[1:]
        if value and value[0] != "'":
            token, value = get_attrtext(value)
            appendto.append(token)
            param.lang = token.value
            if not value or value[0] != "'":
                raise errors.HeaderParseError("Expected RFC2231 char/lang encoding "
                                              "delimiter, but found {}".format(value))
        appendto.append(ValueTerminal("'", 'RFC2231 delimiter'))
        value = value[1:]
    if remainder is not None:
        # Treat the rest of value as bare quoted string content.
        v = Value()
        while value:
            if value[0] in WSP:
                token, value = get_fws(value)
            else:
                token, value = get_qcontent(value)
            v.append(token)
        token = v
    else:
        token, value = get_value(value)
    appendto.append(token)
    if remainder is not None:
        assert not value, value
        value = remainder
    return param, value


def parse_mime_parameters(value):
    """ parameter *( ";" parameter )

    That BNF is meant to indicate this routine should only be called after
    finding and handling the leading ';'.  There is no corresponding rule in
    the formal RFC grammar, but it is more convenient for us for the set of
    parameters to be treated as its own TokenList.

    This is a 'parse' routine because it consumes the remaining value, but it
    would never be called to parse a full header.  Instead it is called to
    parse everything after the non-parameter value of a specific MIME header.

    """
    mime_parameters = MimeParameters()
    while value:
        try:
            token, value = get_parameter(value)
            mime_parameters.append(token)
        except errors.HeaderParseError:
            leader = None
            if value[0] in CFWS_LEADER:
                leader, value = get_cfws(value)
            if not value:
                mime_parameters.append(leader)
                return mime_parameters
            if value[0] == ';':
                if leader is not None:
                    mime_parameters.append(leader)
                mime_parameters.defects.append(errors.InvalidHeaderDefect(
                    "parameter entry with no content"))
            else:
                token, value = get_invalid_parameter(value)
                if leader:
                    token[:0] = [leader]
                mime_parameters.append(token)
                mime_parameters.defects.append(errors.InvalidHeaderDefect(
                    "invalid parameter {!r}".format(token)))
        if value and value[0] != ';':
            # Junk after the otherwise valid parameter.  Mark it as
            # invalid, but it will have a value.
            param = mime_parameters[-1]
            param.token_type = 'invalid-parameter'
            token, value = get_invalid_parameter(value)
            param.extend(token)
            mime_parameters.defects.append(errors.InvalidHeaderDefect(
                "parameter with invalid trailing text {!r}".format(token)))
        if value:
            # Must be a ';' at this point.
            mime_parameters.append(ValueTerminal(';', 'parameter-separator'))
            value = value[1:]
    return mime_parameters
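# Illustrative sketch (not part of the upstream module): given everything
# after the leading ';' of a parameterized MIME header, parse_mime_parameters
# returns a MimeParameters token list whose .params property yields decoded
# (name, value) pairs.  For an input like ' charset="utf-8"' one would expect
# list(parse_mime_parameters(' charset="utf-8"').params) to contain
# ('charset', 'utf-8'); the exact token structure is an implementation detail.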


def _find_mime_parameters(tokenlist, value):
    """Do our best to find the parameters in an invalid MIME header

    """
    while value and value[0] != ';':
        if value[0] in PHRASE_ENDS:
            tokenlist.append(ValueTerminal(value[0], 'misplaced-special'))
            value = value[1:]
        else:
            token, value = get_phrase(value)
            tokenlist.append(token)
    if not value:
        return
    tokenlist.append(ValueTerminal(';', 'parameter-separator'))
    tokenlist.append(parse_mime_parameters(value[1:]))


def parse_content_type_header(value):
    """ maintype "/" subtype *( ";" parameter )

    The maintype and subtype are tokens.  Theoretically they could
    be checked against the official IANA list + x-token, but we
    don't do that.
    """
    ctype = ContentType()
    if not value:
        ctype.defects.append(errors.HeaderMissingRequiredValue(
            "Missing content type specification"))
        return ctype
    try:
        token, value = get_token(value)
    except errors.HeaderParseError:
        ctype.defects.append(errors.InvalidHeaderDefect(
            "Expected content maintype but found {!r}".format(value)))
        _find_mime_parameters(ctype, value)
        return ctype
    ctype.append(token)
    # XXX: If we really want to follow the formal grammar we should make
    # maintype and subtype specialized TokenLists here.  Probably not worth it.
    if not value or value[0] != '/':
        ctype.defects.append(errors.InvalidHeaderDefect(
            "Invalid content type"))
        if value:
            _find_mime_parameters(ctype, value)
        return ctype
    ctype.maintype = token.value.strip().lower()
    ctype.append(ValueTerminal('/', 'content-type-separator'))
    value = value[1:]
    try:
        token, value = get_token(value)
    except errors.HeaderParseError:
        ctype.defects.append(errors.InvalidHeaderDefect(
            "Expected content subtype but found {!r}".format(value)))
        _find_mime_parameters(ctype, value)
        return ctype
    ctype.append(token)
    ctype.subtype = token.value.strip().lower()
    if not value:
        return ctype
    if value[0] != ';':
        ctype.defects.append(errors.InvalidHeaderDefect(
            "Only parameters are valid after content type, but "
            "found {!r}".format(value)))
        # The RFC requires that a syntactically invalid content-type be treated
        # as text/plain.  Perhaps we should postel this, but we should probably
        # only do that if we were checking the subtype value against IANA.
        del ctype.maintype, ctype.subtype
        _find_mime_parameters(ctype, value)
        return ctype
    ctype.append(ValueTerminal(';', 'parameter-separator'))
    ctype.append(parse_mime_parameters(value[1:]))
    return ctype
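# Illustrative sketch (not part of the upstream module): a well-formed value
# such as 'text/plain; charset="utf-8"' should parse into a ContentType token
# list with maintype 'text', subtype 'plain', and a parameter sublist carrying
# the charset, while a value with no '/' instead records an
# InvalidHeaderDefect and leaves maintype/subtype unset.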


def parse_content_disposition_header(value):
    """ disposition-type *( ";" parameter )

    """
    disp_header = ContentDisposition()
    if not value:
        disp_header.defects.append(errors.HeaderMissingRequiredValue(
            "Missing content disposition"))
        return disp_header
    try:
        token, value = get_token(value)
    except errors.HeaderParseError:
        disp_header.defects.append(errors.InvalidHeaderDefect(
            "Expected content disposition but found {!r}".format(value)))
        _find_mime_parameters(disp_header, value)
        return disp_header
    disp_header.append(token)
    disp_header.content_disposition = token.value.strip().lower()
    if not value:
        return disp_header
    if value[0] != ';':
        disp_header.defects.append(errors.InvalidHeaderDefect(
            "Only parameters are valid after content disposition, but "
            "found {!r}".format(value)))
        _find_mime_parameters(disp_header, value)
        return disp_header
    disp_header.append(ValueTerminal(';', 'parameter-separator'))
    disp_header.append(parse_mime_parameters(value[1:]))
    return disp_header
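# Illustrative sketch (not part of the upstream module): a value such as
# 'attachment; filename="report.pdf"' should parse into a ContentDisposition
# token list with content_disposition 'attachment' plus a filename parameter,
# while a bare 'inline' yields just the disposition type and no parameters.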


def parse_content_transfer_encoding_header(value):
    """ mechanism

    """
    # We should probably validate the values, since the list is fixed.
    cte_header = ContentTransferEncoding()
    if not value:
        cte_header.defects.append(errors.HeaderMissingRequiredValue(
            "Missing content transfer encoding"))
        return cte_header
    try:
        token, value = get_token(value)
    except errors.HeaderParseError:
        cte_header.defects.append(errors.InvalidHeaderDefect(
            "Expected content transfer encoding but found {!r}".format(value)))
    else:
        cte_header.append(token)
        cte_header.cte = token.value.strip().lower()
    if not value:
        return cte_header
    while value:
        cte_header.defects.append(errors.InvalidHeaderDefect(
            "Extra text after content transfer encoding"))
        if value[0] in PHRASE_ENDS:
            cte_header.append(ValueTerminal(value[0], 'misplaced-special'))
            value = value[1:]
        else:
            token, value = get_phrase(value)
            cte_header.append(token)
    return cte_header
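# Illustrative sketch (not part of the upstream module): for the value '7bit'
# the returned token's .cte attribute should be '7bit'; for something like
# 'base64 trailing junk' the cte is still taken from the leading token, and
# an InvalidHeaderDefect noting the extra text is recorded on the result.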