Subdomain Collection
Subdomain collection is one of the simplest reconnaissance techniques, and plenty of online tools can do it for you. Here are a few I use regularly.
A scanner to use when you're in a good mood
Why do I put it that way? Because I wrote it myself:
```python
import re
import threading
import time

import requests
from bs4 import BeautifulSoup

url = input('url (e.g. baidu.com): ')
head = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 '
                      '(KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36 '
                      'SE 2.X MetaSr 1.0'}
ip = 'http://site.ip138.com/{}'.format(url)
# domain_url = url.split('.')
# domain_url = domain_url[1] + '.' + domain_url[2]
domain_url = url
domain = 'http://site.ip138.com/{}/domain.htm'.format(domain_url)
t = time.strftime('%Y-%m-%d_', time.localtime())
html_file = open(url + '_' + t + '.html', 'w')
html_file.write('''
<head>
<title>Scan results for %s</title>
<link rel="stylesheet" href="https://cdn.staticfile.org/twitter-bootstrap/3.3.7/css/bootstrap.min.css">
<script src="https://cdn.staticfile.org/jquery/2.1.1/jquery.min.js"></script>
<script src="https://cdn.staticfile.org/twitter-bootstrap/3.3.7/js/bootstrap.min.js"></script>
<style>pre { margin: 0 0 0; }</style>
</head>
<ul id="myTab" class="nav nav-tabs navbar-fixed-top navbar navbar-default">
  <li class="active"><a href="#ip" data-toggle="tab">IP resolution history</a></li>
  <li><a href="#cms" data-toggle="tab">CMS detection</a></li>
  <li><a href="#domain" data-toggle="tab">Subdomains</a></li>
</ul>
<br><br><br><br>
<div id="myTabContent" class="tab-content">
''' % url)


class IP(threading.Thread):
    def __init__(self, ip):
        threading.Thread.__init__(self)
        self.ip = ip

    def run(self):
        r = requests.get(self.ip, headers=head)
        bs = BeautifulSoup(r.text, 'html.parser')
        html_file.write('<div class="tab-pane fade in active" id="ip">')
        for i in bs.find_all('p'):
            html_file.write('<pre>{}</pre>'.format(i.get_text()))
        html_file.write('</div>')


class CMS(threading.Thread):
    def __init__(self, cms):
        threading.Thread.__init__(self)
        self.cms = cms

    def run(self):
        r = requests.post('http://whatweb.bugscaner.com/what/',
                          data={'url': self.cms}, headers=head)
        text = r.text
        Web_Frameworks = re.search('"Web Frameworks": "(.*?)"]', text)
        Programming_Languages = re.search('"Programming Languages":(.*?)"]', text)
        JavaScript_Frameworks = re.search('"JavaScript Frameworks": (.*?)"]', text)
        CMS = re.search('"CMS": (.*?)"]', text)
        Web_Server = re.search('"Web Servers": (.*?)"]', text)
        if CMS:
            CMS = CMS.group(1) + '"]'
        if Programming_Languages:
            Programming_Languages = Programming_Languages.group(1) + '"]'
        if JavaScript_Frameworks:
            JavaScript_Frameworks = JavaScript_Frameworks.group(1) + '"]'
        if Web_Frameworks:
            Web_Frameworks = Web_Frameworks.group(1) + '"]'
        if Web_Server:
            Web_Server = Web_Server.group(1) + '"]'
        html = '''
<div class="tab-pane fade" id="cms">
  <div class="table-responsive">
    <table class="table table-condensed">
      <tr><th>Web framework</th><th>Language</th><th>JavaScript framework</th><th>CMS</th><th>Web server</th></tr>
      <tr><td>{0}</td><td>{1}</td><td>{2}</td><td>{3}</td><td>{4}</td></tr>
    </table>
  </div>
</div>
'''.format(Web_Frameworks, Programming_Languages, JavaScript_Frameworks, CMS, Web_Server)
        html_file.write(html)


class DOMAIN(threading.Thread):
    def __init__(self, domain):
        threading.Thread.__init__(self)
        self.domain = domain

    def run(self):
        r = requests.get(self.domain, headers=head)
        bs = BeautifulSoup(r.text, 'html.parser')
        # Bug fix: the original was missing the closing '>' on this tag.
        html_file.write('<div class="tab-pane fade in active" id="domain">')
        num = 0
        for i in bs.find_all('p'):
            num += 1
            html_file.write('<br>')
            domain_html = '<pre>[{}]: {}</pre>'.format(num, i.get_text())
            html_file.write(domain_html)
            print(domain_html)
        html_file.write('</div>')


# run() is called directly rather than start(), so the three stages
# execute one after another in the main thread.
IP(ip).run()
CMS(url).run()
DOMAIN(domain).run()
html_file.write('</div>')  # close #myTabContent
html_file.close()          # bug fix: the original never closed the file
```
Open-source subdomain scanners on GitHub
https://github.com/lijiejie/subDomainsBrute
https://github.com/chuhades/dnsbrute
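At their core, brute-force tools like the two above just combine a dictionary with the target domain and keep the names that resolve. A minimal sketch of that idea, assuming a tiny illustrative wordlist (real tools use huge dictionaries and concurrent resolvers):

```python
import socket

def gen_candidates(domain, words):
    """Combine each dictionary word with the target domain."""
    return ['{}.{}'.format(w, domain) for w in words]

def resolve(name):
    """Return the resolved IP, or None if the name does not resolve."""
    try:
        return socket.gethostbyname(name)
    except socket.error:
        return None

# Usage: print every candidate that actually resolves.
for host in gen_candidates('example.com', ['www', 'mail', 'dev']):
    found = resolve(host)
    if found:
        print(host, '->', found)
```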
Online collection
1. https://d.chinacycc.com/ (highly recommended)
Results come back in under 30 seconds:
2. http://z.zcjun.com/
3. https://phpinfo.me/domain/
Port Information Collection
Scan ports and flag services that can be brute-forced

```
nmap <target> --script=ftp-brute,imap-brute,smtp-brute,pop3-brute,mongodb-brute,redis-brute,ms-sql-brute,rlogin-brute,rsync-brute,mysql-brute,pgsql-brute,oracle-sid-brute,oracle-brute,rtsp-url-brute,snmp-brute,svn-brute,telnet-brute,vnc-brute,xmpp-brute
```
Check for common vulnerabilities while scanning ports

```
nmap <target> --script=auth,vuln
```
Check for specific vulnerabilities while scanning ports

```
nmap <target> --script=dns-zone-transfer,ftp-anon,ftp-proftpd-backdoor,ftp-vsftpd-backdoor,ftp-vuln-cve2010-4221,http-backup-finder,http-cisco-anyconnect,http-iis-short-name-brute,http-put,http-php-version,http-shellshock,http-robots.txt,http-svn-enum,http-webdav-scan,iis-buffer-overflow,iax2-version,memcached-info,mongodb-info,msrpc-enum,ms-sql-info,mysql-info,nrpe-enum,pptp-version,redis-info,rpcinfo,samba-vuln-cve-2012-1182,smb-vuln-ms08-067,smb-vuln-ms17-010,snmp-info,sshv1,xmpp-info,tftp-enum,teamspeak2-version
```
My preferred workflow:
1. Scan the subdomains
Extract the domains/IPs:
Then put the domains into 975.txt.
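Extracting the domains and IPs from mixed scanner output can be scripted. A rough sketch (the two regexes are deliberately simple and may need tightening; the 975.txt filename follows the text above):

```python
import re

IP_RE = re.compile(r'\b(?:\d{1,3}\.){3}\d{1,3}\b')
DOMAIN_RE = re.compile(r'\b(?:[a-zA-Z0-9-]+\.)+[a-zA-Z]{2,}\b')

def extract_targets(text):
    """Return (sorted domains, sorted IPs) found in a blob of text."""
    ips = set(IP_RE.findall(text))
    # Keep anything domain-shaped that is not already an IP match.
    domains = set(DOMAIN_RE.findall(text)) - ips
    return sorted(domains), sorted(ips)
```

Usage: feed it the saved subdomain-scan output and write the domain list to 975.txt, e.g. `open('975.txt', 'w').write('\n'.join(extract_targets(text)[0]))`.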
2. Batch-scan ports and run vulnerability checks

```
nmap -iL 975.txt --script=auth,vuln,ftp-brute,imap-brute,smtp-brute,pop3-brute,mongodb-brute,redis-brute,ms-sql-brute,rlogin-brute,rsync-brute,mysql-brute,pgsql-brute,oracle-sid-brute,oracle-brute,rtsp-url-brute,snmp-brute,svn-brute,telnet-brute,vnc-brute,xmpp-brute > scan.txt
```
Then do targeted vulnerability hunting based on whichever ports turn out to be open.
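Pulling the open ports back out of the saved report is a one-regex job. A small sketch that parses nmap's normal "PORT STATE SERVICE" table lines (assumption: default output format, not -oX/-oG):

```python
import re

# Matches lines like "3306/tcp  open  mysql" in nmap's normal output.
PORT_RE = re.compile(r'^(\d+)/(tcp|udp)\s+open\s+(\S+)', re.M)

def open_ports(nmap_text):
    """Return (port, protocol, service) for every open port in the report."""
    return [(int(p), proto, svc) for p, proto, svc in PORT_RE.findall(nmap_text)]
```

Usage: `open_ports(open('scan.txt').read())` yields tuples like `(3306, 'tcp', 'mysql')`, which tell you which services deserve a closer look.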
C-Segment Information Collection
For C segments I usually use the IIS PUT Scanner tool: it lets you scan a custom port list across hosts .1 through .255 and also returns the server's banner.
Custom ports

```
135,139,80,8080,15672,873,8983,7001,4848,6379,2381,8161,11211,5335,5336,7809,2181,9200,50070,50075,5984,2375,7809,16992,16993
```

This is just to show how nicely it runs.
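For reference, the core of such a tool fits in a few lines: connect to each host in the /24 on each listed port and read whatever the service sends first. A sketch, assuming the `192.168.1.` prefix and port list are placeholders (note that banner-first protocols like FTP/SSH/SMTP respond immediately, while HTTP waits for a request and will just time out here):

```python
import socket

def grab_banner(host, port, timeout=1.0):
    """Return the first bytes a service volunteers, or None on failure."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.settimeout(timeout)
            return s.recv(128).decode('latin-1', 'replace').strip()
    except OSError:  # refused, unreachable, or timed out
        return None

def scan_c_segment(prefix, ports):
    """prefix like '192.168.1.' -> yields (host, port, banner) hits."""
    for i in range(1, 256):
        host = prefix + str(i)
        for port in ports:
            banner = grab_banner(host, port)
            if banner is not None:
                yield host, port, banner
```

Usage: `for hit in scan_c_segment('192.168.1.', [21, 22, 6379]): print(hit)`. A real scanner would thread this; done serially, 255 hosts times a 1-second timeout is painfully slow.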
Directory Information Collection
There are plenty of directory-scanning tools, but what matters most is the dictionary. I once merged and deduplicated the wordlists from many tools into one enormous dictionary, but forgot to back it up when my old machine was restored... (I mention this mainly so you can build one the same way, for your own convenience, and then send me a copy, for both of ours.)
Recommended tool: 7kbstorm
https://github.com/7kbstorm/7kbscan-WebPathBrute
Never write off 403 or 404 pages; just feed those paths back into the directory scanner and keep going.
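The core loop of a directory brute-forcer is simple enough to sketch. This is a minimal stand-in for tools like 7kbscan-WebPathBrute, assuming an illustrative status-code set (403 is kept deliberately, per the note above) and a wordlist supplied by the caller:

```python
import requests

INTERESTING = {200, 301, 302, 403}  # illustrative; tune per target

def join_url(base, path):
    """Join a base URL and a wordlist entry with exactly one slash."""
    return base.rstrip('/') + '/' + path.lstrip('/')

def brute_dirs(base_url, paths):
    """Request each candidate path and return (url, status) hits."""
    hits = []
    for path in paths:
        target = join_url(base_url, path)
        try:
            r = requests.get(target, timeout=5, allow_redirects=False)
        except requests.RequestException:
            continue
        if r.status_code in INTERESTING:
            hits.append((target, r.status_code))
    return hits
```

Usage: `brute_dirs('http://target.example', ['admin/', 'backup.zip', '.git/'])`. The dictionary, as the text says, is what really decides the results.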
Using Search-Engine Dorks to Collect Sensitive Files
The most common approach is just a search engine~

```
site:ooxx.com filetype:xls
```

First, try Baidu:
Now try Bing: