How to download entire front-end of a website

Problem Description

I want to download all the HTML, CSS, and JS files of an entire website in one click. I tried right-clicking and viewing the source code, but then I have to copy-paste each page and create the folders myself, which is very tedious. Is there any open-source software that can help with this, or do I have to code it myself?

Solution

wget is your friend here, and it works on Windows, Mac, and Linux.

wget -r -np -k http://yourtarget.com/even/path/down/if/you/need/it/

-r: recursive download
-np: do not follow links to parent directories
-k: make links in downloaded HTML or CSS point to local files
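
For a typical front-end grab (HTML plus the CSS, JS, and images each page needs), those three flags are often combined with wget's page-requisite options. A minimal sketch, with example.com as a placeholder target; -p and -E are standard wget flags not mentioned in the original answer:

# -p (--page-requisites): also fetch images, stylesheets, and scripts each page needs
# -E (--adjust-extension): save pages served as text/html with an .html suffix
wget -r -np -k -p -E http://example.com/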

Other useful options:

-nd (no directories): download all files to the current directory
-e robots=off: ignore robots.txt files, do not download robots.txt files
-A png,jpg: accept only files with the extension png or jpg
-m (mirror): shorthand for -r --timestamping --level=inf --no-remove-listing
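
Putting the mirror options together, a hedged sketch of a polite full-site mirror, again with example.com as a placeholder; --wait and --limit-rate are standard wget throttling flags, and the values shown are arbitrary choices:

# -m: mirror mode (-r --timestamping --level=inf --no-remove-listing)
# -k -E: rewrite links for local browsing and normalize HTML extensions
# --wait=1 --limit-rate=200k: throttle requests to avoid hammering the server
wget -m -k -E --wait=1 --limit-rate=200k -e robots=off http://example.com/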
