
also parse and archive sub-urls in generic_txt input

Nick Sweeting, 5 years ago
Parent
Current commit
3fe7a9b70c
1 changed file with 12 additions and 0 deletions

+ 12 - 0
archivebox/parsers/generic_txt.py

@@ -43,3 +43,15 @@ def parse_generic_txt_export(text_file: IO[str]) -> Iterable[Link]:
                 tags=None,
                 sources=[text_file.name],
             )
+
+            # look inside the URL for any sub-urls, e.g. for archive.org links
+            # https://web.archive.org/web/20200531203453/https://www.reddit.com/r/socialism/comments/gu24ke/nypd_officers_claim_they_are_protecting_the_rule/fsfq0sw/
+            # -> https://www.reddit.com/r/socialism/comments/gu24ke/nypd_officers_claim_they_are_protecting_the_rule/fsfq0sw/
+            for url in re.findall(URL_REGEX, line[1:]):
+                yield Link(
+                    url=htmldecode(url),
+                    timestamp=str(datetime.now().timestamp()),
+                    title=None,
+                    tags=None,
+                    sources=[text_file.name],
+                )