This is a traffic-analysis challenge; help me find the flag. The flag format is dart{}

Created: March 23, 2026

Model used: GPT-5.4 Thinking by Chat01

Question

traffic_hunt.pcapng

This is a traffic-analysis challenge; help me find the flag.
The flag format is dart{}

Thinking

bash -lc 'ls -lh /mnt/data && file /mnt/data/traffic_hunt.pcapng && (which tshark || which tcpdump || which ngrep)'

bash -lc 'which tshark; which tcpdump; which capinfos; which strings; which grep'

bash -lc python - <<'PY'
import importlib.util as u
mods=['scapy','pyshark','dpkt','pcapng']
for m in mods:
    print(m, u.find_spec(m))
PY

bash -lc command -v tcpdump || true; command -v editcap || true; command -v mergecap || true; command -v nghttp || true

bash -lc python -m pip install scapy -q

bash -lc python - <<'PY'
import scapy
print(scapy.__version__)
PY

Inspecting packet counts and potential formats

I'm thinking of using scapy's rdpcap to analyze the packet capture. It could help with parsing pcapng files, but I’ll need to inspect the packet summary counts. Let me focus on getting a clearer view of those counts to pinpoint the necessary packet details.

bash -lc python - <<'PY'
from scapy.all import PcapNgReader
from collections import Counter
from scapy.layers.inet import TCP, UDP, IP
from scapy.layers.dns import DNS
from scapy.layers.http import HTTPRequest, HTTPResponse
path='/mnt/data/traffic_hunt.pcapng'
count=0
proto=Counter(); tcp_ports=Counter(); udp_ports=Counter(); dnsq=[]
for pkt in PcapNgReader(path):
    count+=1
    if IP in pkt:
        if TCP in pkt:
            proto['TCP']+=1
            tcp_ports[pkt[TCP].dport]+=1
            tcp_ports[pkt[TCP].sport]+=1
        elif UDP in pkt:
            proto['UDP']+=1
            udp_ports[pkt[UDP].dport]+=1
            udp_ports[pkt[UDP].sport]+=1
    else:
        proto[pkt.name]+=1
print('packets', count)
print('proto', proto.most_common(10))
print('tcp_ports', tcp_ports.most_common(20))
print('udp_ports', udp_ports.most_common(20))
PY

Handling Scapy import issues

It looks like importing Scapy is failing due to issues with route6. Maybe I should disable IPv6 or switch to using individual readers. I’m thinking of trying from scapy.utils import PcapNgReader, which could be simpler. I’ll check the minimal imports and see if this helps resolve the problem.

bash -lc python - <<'PY'
from scapy.utils import PcapNgReader
print(PcapNgReader)
PY

bash -lc python - <<'PY'
from scapy.utils import PcapNgReader
path='/mnt/data/traffic_hunt.pcapng'
reader=PcapNgReader(path)
for i,p in enumerate(reader):
    print(i, type(p), p.summary())
    if i>=9:
        break
PY

Manually decoding packet data

I think I’ll need to manually decode the packet. My first step will be to inspect the hex of the first packet and see what we're dealing with. After that, I can figure out how best to proceed with the decoding process based on what I find.

bash -lc python - <<'PY'
from scapy.utils import PcapNgReader
from binascii import hexlify
reader=PcapNgReader('/mnt/data/traffic_hunt.pcapng')
for i,p in enumerate(reader):
    b=bytes(p)
    print('pkt',i,'len',len(b),hexlify(b[:64]).decode())
    if i==2:
        break
PY

Parsing Ethernet/IP/TCP manually

It looks like I'll need to parse Ethernet, IP, and TCP layers manually. Maybe I can use struct or Scapy's Ether import. Alternatively, I could try dpkt or Scapy without the inet module. Maybe installing dpkt could be a good option. I'll go ahead and use pip to install it.
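Since the full Scapy import chain was failing, the fallback described above is manual header parsing with the stdlib `struct` module. A minimal sketch on a synthetic frame (the helper name and the test packet are mine, not from the capture):

```python
# Minimal manual parse of Ethernet II / IPv4 / TCP headers, stdlib only.
# Assumes plain Ethernet II framing with no VLAN tags; field offsets follow
# RFC 791/793.
import struct

def parse_eth_ip_tcp(buf: bytes):
    """Return (src_ip, dst_ip, sport, dport, payload), or None if not IPv4/TCP."""
    if len(buf) < 34:
        return None
    ethertype = struct.unpack('!H', buf[12:14])[0]
    if ethertype != 0x0800:            # 0x0800 = IPv4
        return None
    ip = buf[14:]
    ihl = (ip[0] & 0x0F) * 4           # IPv4 header length in bytes
    if ip[9] != 6:                     # protocol 6 = TCP
        return None
    src = '.'.join(map(str, ip[12:16]))
    dst = '.'.join(map(str, ip[16:20]))
    tcp = ip[ihl:]
    sport, dport = struct.unpack('!HH', tcp[:4])
    doff = (tcp[12] >> 4) * 4          # TCP data offset in bytes
    return src, dst, sport, dport, tcp[doff:]

# Synthetic frame: zeroed MACs + IPv4 ethertype, minimal 20-byte IPv4 and
# 20-byte TCP headers, carrying the payload b'DATA'.
eth  = b'\x00' * 12 + b'\x08\x00'
ipv4 = (b'\x45\x00' + (44).to_bytes(2, 'big') + b'\x00\x00\x00\x00'
        + b'\x40\x06\x00\x00' + bytes([10, 0, 0, 1]) + bytes([10, 0, 0, 2]))
tcp  = ((1234).to_bytes(2, 'big') + (8080).to_bytes(2, 'big')
        + b'\x00' * 8 + b'\x50\x00' + b'\x00' * 6 + b'DATA')
parsed = parse_eth_ip_tcp(eth + ipv4 + tcp)
```

dpkt ends up doing this same work below, but the sketch shows what "parsing manually" means here.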

bash -lc python -m pip install dpkt -q && python - <<'PY'
import dpkt
print(dpkt.__version__)
PY

Using dpkt for analysis

I'll use dpkt's pcapng Reader to analyze the capture. I'll start by counting the different protocols and IPs, then follow the TCP streams. Additionally, I'll search through the strings to see if a flag format appears. This should give me a comprehensive view of the packet data.

bash -lc python - <<'PY'
import dpkt, socket, re, os
from collections import Counter, defaultdict
path='/mnt/data/traffic_hunt.pcapng'
with open(path,'rb') as f:
    pcap=dpkt.pcapng.Reader(f)
    count=0
    ip_counter=Counter(); flows=Counter(); ports=Counter()
    dns_queries=[]; http_hosts=Counter(); tls_snis=Counter(); printable=[]
    for ts, buf in pcap:
        count += 1
        try:
            eth=dpkt.ethernet.Ethernet(buf)
        except Exception:
            continue
        if not isinstance(eth.data, dpkt.ip.IP):
            continue
        ip=eth.data
        src=socket.inet_ntoa(ip.src); dst=socket.inet_ntoa(ip.dst)
        ip_counter[src]+=1; ip_counter[dst]+=1
        if isinstance(ip.data, dpkt.tcp.TCP):
            tcp=ip.data
            ports[tcp.sport]+=1; ports[tcp.dport]+=1
            flows[(src,tcp.sport,dst,tcp.dport)] += 1
            data=bytes(tcp.data)
            if data:
                printable.append(data)
            if tcp.dport==80 or tcp.sport==80:
                # try host extraction
                if data.startswith((b'GET ',b'POST ',b'HEAD ',b'PUT ',b'DELETE ',b'OPTIONS ')):
                    m=re.search(br'\r\nHost: ([^\r\n]+)', data, re.I)
                    if m:
                        http_hosts[m.group(1).decode('latin1','ignore')] += 1
            if tcp.dport==443 or tcp.sport==443:
                # quick SNI parse from TLS client hello
                # skip for now
                pass
        elif isinstance(ip.data, dpkt.udp.UDP):
            udp=ip.data
            ports[udp.sport]+=1; ports[udp.dport]+=1
            if udp.dport==53 or udp.sport==53:
                try:
                    dns=dpkt.dns.DNS(udp.data)
                    if dns.qd:
                        for q in dns.qd:
                            dns_queries.append(q.name)
                except Exception:
                    pass
print('packets',count)
print('top ips',ip_counter.most_common(10))
print('top ports',ports.most_common(20))
print('top flows',flows.most_common(20))
print('dns',Counter(dns_queries).most_common(20))
print('http hosts',http_hosts.most_common(20))
# grep likely flag strings from payloads
blob=b'\n'.join(printable)
for pat in [br'dart{[^}\r\n]{1,200}}', br'flag{[^}\r\n]{1,200}}', br'ctf{[^}\r\n]{1,200}}', br'[A-Za-z0-9_/+=-]{10,}']:
    ms=re.findall(pat, blob, re.I)
    if ms:
        print('pattern',pat, 'sample', ms[:10])
PY

Analyzing TCP 8080 for HTTP traffic

It looks like I need to analyze TCP traffic on port 8080, which is likely HTTP but could be custom. I'll start by extracting the HTTP requests and responses from the streams. Also, searching the payload for dart or base64 encoding could help. I'll reconstruct sessions and possibly use dpkt's HTTP parsers.

bash -lc python - <<'PY'
import dpkt, socket, re, gzip, zlib
from collections import defaultdict, Counter
path='/mnt/data/traffic_hunt.pcapng'
streams=defaultdict(lambda: {0:[],1:[]})
# direction 0: client->server where dport=8080, 1: server->client
with open(path,'rb') as f:
    for ts,buf in dpkt.pcapng.Reader(f):
        try:
            eth=dpkt.ethernet.Ethernet(buf)
            ip=eth.data
            if not isinstance(ip, dpkt.ip.IP):
                continue
            tcp=ip.data
            if not isinstance(tcp, dpkt.tcp.TCP):
                continue
        except Exception:
            continue
        sip=socket.inet_ntoa(ip.src); dip=socket.inet_ntoa(ip.dst)
        if tcp.dport==8080:
            key=(sip,tcp.sport,dip,8080)
            streams[key][0].append((tcp.seq, bytes(tcp.data)))
        elif tcp.sport==8080:
            key=(dip,tcp.dport,sip,8080)
            streams[key][1].append((tcp.seq, bytes(tcp.data)))

def reassemble(parts):
    # naive by seq, unique sequences
    out=[]; seen=set()
    for seq,data in sorted(parts, key=lambda x:x[0]):
        if not data or (seq,len(data)) in seen:
            continue
        seen.add((seq,len(data)))
        out.append(data)
    return b''.join(out)

# Analyze first several streams
print('num streams',len(streams))
for i,(k,v) in enumerate(sorted(streams.items(), key=lambda kv: len(kv[1][0])+len(kv[1][1]), reverse=True)[:20]):
    c2s=reassemble(v[0]); s2c=reassemble(v[1])
    print('\nSTREAM',i,k,'c2s',len(c2s),'s2c',len(s2c))
    for label,data in [('C',c2s[:500]),('S',s2c[:500])]:
        txt=data.decode('latin1','ignore')
        txt=txt.replace('\r','\\r').replace('\n','\\n\n')
        print(label,txt[:500])
PY

Investigating potential exploit attack pattern

There are many scan attempts, possibly targeting the "rememberMe" cookie. I'm suspecting an Apache Shiro deserialization exploit. My next steps: find successful exploit instances and the resulting commands or flags. I'll look for login attempts or endpoints with 200 responses and suspicious bodies. Parsing individual HTTP requests might be better, especially for catching serialized payloads or exfiltration attempts. I'll also search for strings like "dart" in those responses.
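The reasoning above boils down to a simple heuristic: a failed Shiro key-brute attempt is answered with `Set-Cookie: rememberMe=deleteMe`, while a response that sets any other `rememberMe` value deserves a closer look. A minimal sketch of that classifier (the function name and the three labels are mine, and this is a heuristic, not a proof of exploitation):

```python
# Classify an HTTP response by its Shiro rememberMe behavior.
# deleteMe => the server rejected the serialized cookie (failed attempt);
# any other rememberMe value being set => possibly a successful key guess.
import re

def classify_shiro_response(raw: bytes) -> str:
    if re.search(rb'Set-Cookie:\s*rememberMe=deleteMe', raw):
        return 'failed-attempt'
    if re.search(rb'Set-Cookie:\s*rememberMe=(?!deleteMe)[^;\r\n]+', raw):
        return 'suspicious'
    return 'no-marker'
```

Running this over each reassembled server-to-client stream quickly separates the scanner noise from the interesting sessions.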

bash -lc strings -a /mnt/data/traffic_hunt.pcapng | grep -i -n 'dart{' | head -20

bash -lc strings -a /mnt/data/traffic_hunt.pcapng | grep -i -E 'flag{|dart|rememberMe|JSESSIONID|cmd=|whoami|id|pwd|cat /|bash -c|wget|curl|nc ' | head -200

Improving stream parsing for potential exploit attempts

The search results aren’t sufficient yet, so I need to improve how I parse the streams. The "rememberMe" cookie likely signals Shiro exploit attempts, so I’ll focus on identifying a successful attempt where "rememberMe" remains instead of being replaced with "deleteMe." I’ll check for 200 OK responses on requests related to this cookie, extract all relevant requests, and analyze response statuses. By parsing HTTP requests sequentially with better reassembly and using dpkt parsers, I can spot potential brute-force key behavior.

bash -lc python - <<'PY'
import dpkt, socket, re
from collections import defaultdict, Counter
path='/mnt/data/traffic_hunt.pcapng'
streams=defaultdict(lambda:{0:[],1:[]})
with open(path,'rb') as f:
    for ts,buf in dpkt.pcapng.Reader(f):
        try:
            eth=dpkt.ethernet.Ethernet(buf); ip=eth.data; tcp=ip.data
            if not isinstance(ip, dpkt.ip.IP) or not isinstance(tcp, dpkt.tcp.TCP):
                continue
        except Exception:
            continue
        sip=socket.inet_ntoa(ip.src); dip=socket.inet_ntoa(ip.dst)
        if tcp.dport==8080:
            streams[(sip,tcp.sport,dip,8080)][0].append((tcp.seq,bytes(tcp.data)))
        elif tcp.sport==8080:
            streams[(dip,tcp.dport,sip,8080)][1].append((tcp.seq,bytes(tcp.data)))

def reassemble(parts):
    # overlap-aware simplistic reassembly
    segs=sorted([(s,d) for s,d in parts if d], key=lambda x:x[0])
    out=bytearray(); cur=None
    for seq,data in segs:
        if cur is None:
            out.extend(data); cur=seq+len(data); continue
        if seq >= cur:
            out.extend(data); cur=seq+len(data)
        else:
            overlap=cur-seq
            if overlap < len(data):
                out.extend(data[overlap:]); cur=seq+len(data)
    return bytes(out)

entries=[]
for k,v in streams.items():
    c2s=reassemble(v[0]); s2c=reassemble(v[1])
    if b'rememberMe=' in c2s or b'/login' in c2s or b'POST /login' in c2s or b'cmd=' in c2s or b'echo ' in c2s:
        # count statuses
        statuses=re.findall(br'HTTP/1.1 (\d+)', s2c)
        reqs=re.findall(br'((?:GET|POST) [^\r\n]+ HTTP/1.1\r\n(?:.*?\r\n)*?\r\n(?:[^GPH].*?)?)(?=(?:GET|POST|HTTP/1.1)|\Z)', c2s, re.S)
        entries.append((k, c2s, s2c, statuses))

print('candidate streams',len(entries))
for k,c2s,s2c,statuses in entries[:20]:
    print('\nSTREAM',k,'statuses',statuses[:10])
    m=re.search(br'Cookie: rememberMe=([^\r\n;]+)', c2s)
    if m:
        print('rememberMe',m.group(1)[:80])
    print(c2s[:1000].decode('latin1','ignore').replace('\r','\\r').replace('\n','\\n\n'))
    print('RESP', s2c[:500].decode('latin1','ignore').replace('\r','\\r').replace('\n','\\n\n'))
PY

Narrowing search to focus on rememberMe cookie

I need to focus the search more specifically on the "rememberMe" cookie. It looks like the initial candidate listing didn’t show results because those records weren’t among the first items. I’ll now refocus on finding all streams with the "rememberMe" cookie and look for status patterns, especially ones that aren’t "deleteMe" or show a "200 OK" response.

bash -lc python - <<'PY'
import dpkt, socket, re
from collections import defaultdict
path='/mnt/data/traffic_hunt.pcapng'
streams=defaultdict(lambda:{0:[],1:[]})
with open(path,'rb') as f:
    for ts,buf in dpkt.pcapng.Reader(f):
        try:
            eth=dpkt.ethernet.Ethernet(buf); ip=eth.data; tcp=ip.data
            if not isinstance(ip, dpkt.ip.IP) or not isinstance(tcp, dpkt.tcp.TCP):
                continue
        except Exception:
            continue
        sip=socket.inet_ntoa(ip.src); dip=socket.inet_ntoa(ip.dst)
        if tcp.dport==8080:
            streams[(sip,tcp.sport,dip,8080)][0].append((tcp.seq,bytes(tcp.data)))
        elif tcp.sport==8080:
            streams[(dip,tcp.dport,sip,8080)][1].append((tcp.seq,bytes(tcp.data)))

def reassemble(parts):
    segs=sorted([(s,d) for s,d in parts if d], key=lambda x:x[0])
    out=bytearray(); cur=None
    for seq,data in segs:
        if cur is None:
            out.extend(data); cur=seq+len(data); continue
        if seq >= cur:
            out.extend(data); cur=seq+len(data)
        else:
            overlap=cur-seq
            if overlap < len(data):
                out.extend(data[overlap:]); cur=seq+len(data)
    return bytes(out)

hits=[]
for k,v in streams.items():
    c2s=reassemble(v[0]); s2c=reassemble(v[1])
    if b'rememberMe=' in c2s:
        rems=re.findall(br'rememberMe=([^\r\n;]+)', c2s)
        statuses=re.findall(br'HTTP/1.1 (\d+)', s2c)
        delete=b'rememberMe=deleteMe' in s2c
        hits.append((k,len(c2s),len(s2c),len(rems),statuses[:10],delete,rems[0][:50]))
print('remember streams',len(hits))
for x in hits[:50]:
    print(x)
PY

bash -lc python - <<'PY'
import dpkt, socket, re
from collections import defaultdict
path='/mnt/data/traffic_hunt.pcapng'
streams=defaultdict(lambda:{0:[],1:[]})
with open(path,'rb') as f:
    for ts,buf in dpkt.pcapng.Reader(f):
        try:
            eth=dpkt.ethernet.Ethernet(buf); ip=eth.data; tcp=ip.data
            if not isinstance(ip, dpkt.ip.IP) or not isinstance(tcp, dpkt.tcp.TCP):
                continue
        except Exception:
            continue
        sip=socket.inet_ntoa(ip.src); dip=socket.inet_ntoa(ip.dst)
        if tcp.dport==8080:
            streams[(sip,tcp.sport,dip,8080)][0].append((tcp.seq,bytes(tcp.data)))
        elif tcp.sport==8080:
            streams[(dip,tcp.dport,sip,8080)][1].append((tcp.seq,bytes(tcp.data)))

def reassemble(parts):
    segs=sorted([(s,d) for s,d in parts if d], key=lambda x:x[0])
    out=bytearray(); cur=None
    for seq,data in segs:
        if cur is None:
            out.extend(data); cur=seq+len(data); continue
        if seq >= cur:
            out.extend(data); cur=seq+len(data)
        else:
            overlap=cur-seq
            if overlap < len(data):
                out.extend(data[overlap:]); cur=seq+len(data)
    return bytes(out)

for port in [56916,57066]:
    k=('10.1.243.155',port,'10.1.33.69',8080)
    c2s=reassemble(streams[k][0]); s2c=reassemble(streams[k][1])
    print('\n=== STREAM',port,'===')
    reqs=re.split(br'(?=(?:GET|POST|PUT|DELETE|HEAD|OPTIONS) /)', c2s)
    resps=re.split(br'(?=HTTP/1.1 )', s2c)
    for i,r in enumerate(reqs[:10]):
        if not r.strip():
            continue
        m=re.match(br'([A-Z]+) ([^ ]+) HTTP/1.1', r)
        print('REQ',i,m.groups() if m else r[:50])
        cm=re.search(br'Cookie: ([^\r\n]+)', r)
        if cm:
            print(' cookie',cm.group(1)[:120])
    print('statuses counts', {s.decode():s2c.count(b'HTTP/1.1 '+s) for s in set(re.findall(br'HTTP/1.1 (\d+)',s2c))})
    print('contains non-delete rememberMe in resp?', bool(re.search(br'Set-Cookie: rememberMe=(?!deleteMe)', s2c)))
    # print snippets around any 200 responses
    for m in re.finditer(br'HTTP/1.1 200 .*?\r\n\r\n', s2c, re.S):
        st=max(0, m.start()); en=min(len(s2c), m.end()+300)
        print('200 snippet:\n', s2c[st:en].decode('latin1','ignore').replace('\r','\\r').replace('\n','\\n\n'))
        break
    # show last requests/responses
    print('LAST REQ SNIP')
    print(c2s[-2500:].decode('latin1','ignore').replace('\r','\\r').replace('\n','\\n\n'))
    print('LAST RESP SNIP')
    print(s2c[-2500:].decode('latin1','ignore').replace('\r','\\r').replace('\n','\\n\n'))
PY

Exploring exploit results and command outputs

We’ve got a successful exploit and command execution in stream 57066. Now, I need to track the flag, which is likely hidden within further commands after the exploit. I should decode the commands and associated outputs, checking for serialized payloads, base64 encodings, and possible command patterns in request parameters. Then, I’ll reconstruct the sequence of responses for better clarity.

Tracking potential flag in subsequent traffic

I now need to find the flag, which might be in a file accessible after a shell is deployed or created through the process. I'll review later streams after 05:57:43 and look for requests yielding 200 responses, possibly with $$$ wrappers or under specific paths. These could highlight webshell access. Let’s search for any patterns across the entire pcap.
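Many JSP webshell families wrap command output in fixed delimiters (for example, the China-Chopper-style `->|...|<-` markers mentioned later). A small generic extractor for such wrapped output, sketched here with assumed delimiters rather than anything the capture has confirmed:

```python
# Pull out payloads wrapped between fixed webshell markers.
# The default "->|" / "|<-" pair is an assumption typical of
# China-Chopper-style shells; pass other delimiters as needed.
import re

def extract_wrapped(blob: bytes, start: bytes = b'->|', end: bytes = b'|<-'):
    pat = re.escape(start) + rb'(.*?)' + re.escape(end)
    return re.findall(pat, blob, re.S)
```

Scanning every reassembled server-to-client stream with this kind of helper is a cheap way to surface command output even before the transport is understood.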

bash -lc strings -a /mnt/data/traffic_hunt.pcapng | grep -n -E '\$\$\$|Success|jsp\?pwd|cmd=' | head -200

Identifying exploit and flag in subsequent streams

I need to search for POST requests after the exploit, potentially with a webshell upload. I should parse the request sequence to identify triggers and check for base64 responses to uncover any commands. Flagging the webshell path, maybe /favicondemo.ico, and inspecting packets around 05:57:43 could help. I'll focus on analyzing all relevant streams chronologically.

Identifying flag through webshell traffic

I suspect there’s just one victim/client involved, with an exploit tool scanning and then moving to post-exploitation. It seems like there could be a flag in large base64 chunks in responses, such as those from /favicondemo.ico, possibly indicating encrypted traffic or shell management. I'll need to inspect these responses closely, particularly any decoded ones, to find the key flag.

Investigating webshell communication

It seems like the communication could be related to a Godzilla JSP webshell, using POST requests with parameters and maybe AES/MD5 encryption. The response might be base64-encoded, and we’re looking for a success marker like "->|Success|<-". I’ll search for any interactions, especially around /favicondemo.ico, to detect potential passwords or keys for decryption.

bash -lc python - <<'PY'
import dpkt, socket, re
from collections import defaultdict
path='/mnt/data/traffic_hunt.pcapng'
flows=[]
with open(path,'rb') as f:
    for ts,buf in dpkt.pcapng.Reader(f):
        try:
            eth=dpkt.ethernet.Ethernet(buf); ip=eth.data; tcp=ip.data
            if not isinstance(ip, dpkt.ip.IP) or not isinstance(tcp, dpkt.tcp.TCP) or tcp.dport!=8080:
                continue
            data=bytes(tcp.data)
            if b'/favicondemo.ico' in data or b'/favicon' in data or b'POST /' in data[:100]:
                sip=socket.inet_ntoa(ip.src); dip=socket.inet_ntoa(ip.dst)
                flows.append((ts,sip,tcp.sport,dip,tcp.dport,data[:1500]))
        except Exception:
            continue
print('matches',len(flows))
for ts,sip,sport,dip,dport,data in flows[:20]:
    print('\nTS',ts,'flow',sip,sport,'->',dip,dport)
    print(data.decode('latin1','ignore').replace('\r','\\r').replace('\n','\\n\n'))
PY

Investigating /favicondemo.ico requests

I need to specifically search for requests related to /favicondemo.ico. This could help identify the important interactions with the webshell, especially considering its potential use in the command and control process. I'll track these requests to see if there's anything noteworthy in the parameters or responses.

bash -lc python - <<'PY'
import dpkt, socket
path='/mnt/data/traffic_hunt.pcapng'
count=0
with open(path,'rb') as f:
    for ts,buf in dpkt.pcapng.Reader(f):
        try:
            eth=dpkt.ethernet.Ethernet(buf); ip=eth.data; tcp=ip.data
            if not isinstance(ip, dpkt.ip.IP) or not isinstance(tcp, dpkt.tcp.TCP) or tcp.dport!=8080:
                continue
        except Exception:
            continue
        data=bytes(tcp.data)
        if b'/favicondemo.ico' in data:
            count+=1
            print('TS',ts,'sport',tcp.sport)
            print(data[:1200].decode('latin1','ignore').replace('\r','\\r').replace('\n','\\n\n'))
            print('---')
            if count>=20:
                break
print('count',count)
PY

Inspecting POST /favicondemo.ico and related traffic

I need to inspect the POST requests to /favicondemo.ico, especially around stream 57066. These might contain serialized exploit data, with the potential to reveal commands or shell traffic. I'll check for command outputs like root or ls, and if the data is encrypted, I'll look into decrypting it. I'll also search for /1nt1.ico since it might point to a shell loader. Further inspection of request bodies and response patterns will help clarify the webshell's nature.

bash -lc python - <<'PY'
import dpkt, socket
from collections import defaultdict
path='/mnt/data/traffic_hunt.pcapng'
# collect full c2s for flow 52023 only
parts=[]
with open(path,'rb') as f:
    for ts,buf in dpkt.pcapng.Reader(f):
        try:
            eth=dpkt.ethernet.Ethernet(buf); ip=eth.data; tcp=ip.data
            if not isinstance(ip, dpkt.ip.IP) or not isinstance(tcp, dpkt.tcp.TCP):
                continue
        except Exception:
            continue
        if socket.inet_ntoa(ip.src)=='10.1.243.155' and socket.inet_ntoa(ip.dst)=='10.1.33.69' and tcp.sport==52023 and tcp.dport==8080:
            parts.append((tcp.seq,bytes(tcp.data)))

# reassemble
parts=sorted([(s,d) for s,d in parts if d], key=lambda x:x[0])
out=bytearray(); cur=None
for seq,data in parts:
    if cur is None:
        out.extend(data); cur=seq+len(data)
    elif seq>=cur:
        out.extend(data); cur=seq+len(data)
    else:
        overlap=cur-seq
        if overlap<len(data):
            out.extend(data[overlap:]); cur=seq+len(data)
raw=bytes(out)

# split requests
chunks=raw.split(b'POST /favicondemo.ico HTTP/1.1')
print('count',len(chunks)-1)
for i,ch in enumerate(chunks[1:6],1):
    req=b'POST /favicondemo.ico HTTP/1.1'+ch
    hdr,body=req.split(b'\r\n\r\n',1)
    body=body[:200]
    print('\nREQ',i)
    print(hdr.decode('latin1','ignore'))
    print('body first 200 bytes:', body[:200])
    print('body as latin1:', body[:200].decode('latin1','ignore'))
PY

Inspecting /1nt1.ico and potential shell client

I think the bodies could be base64-encoded, maybe even AES-encrypted. I need to focus on gathering the key, which might be embedded in the /1nt1.ico page or its response bodies. I’ll search for /1nt1.ico, looking for responses that might include embedded JavaScript for a shell client.

bash -lc python - <<'PY'
import dpkt, socket
path='/mnt/data/traffic_hunt.pcapng'
for ts,buf in dpkt.pcapng.Reader(open(path,'rb')):
    try:
        eth=dpkt.ethernet.Ethernet(buf); ip=eth.data; tcp=ip.data
        if not isinstance(ip, dpkt.ip.IP) or not isinstance(tcp, dpkt.tcp.TCP):
            continue
    except Exception:
        continue
    data=bytes(tcp.data)
    if (b'GET /1nt1.ico ' in data or b'/1nt1.ico' in data) and tcp.dport==8080:
        print('REQ TS',ts,'sport',tcp.sport)
        print(data[:1200].decode('latin1','ignore').replace('\r','\\r').replace('\n','\\n\n'))
    if b'HTTP/1.1' in data and b'1nt1.ico' in data and tcp.sport==8080:
        print('RESP TS',ts,'dport',tcp.dport)
        print(data[:1200].decode('latin1','ignore').replace('\r','\\r').replace('\n','\\n\n'))
PY

Decrypting /favicondemo.ico traffic and webshell analysis

I need to decrypt traffic related to /favicondemo.ico since it could be tied to the Behinder JSP shell. The POST bodies are base64 strings, and the response seems encrypted, possibly using AES with the key derived from an MD5 password. I also need to find where the web shell is uploaded—maybe under /1nt1.ico or /favicondemo.ico. I'll review server responses around the "Success" marker and command outputs to locate any uploads.
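Behinder-style shells commonly derive their AES-128 key as the first 16 hex characters of md5(password), so once a candidate password surfaces, the key is one line of stdlib code. A sketch of just the derivation (the password below is a placeholder; the real one is still unknown at this point, and the actual AES decryption would additionally need a crypto library):

```python
# Behinder/"冰蝎" convention: AES-128 key = md5(password).hexdigest()[:16].
# This only derives the key; decrypting the POST bodies would then be
# AES(key) over the base64-decoded payload.
import hashlib

def behinder_key(password: str) -> bytes:
    return hashlib.md5(password.encode()).hexdigest()[:16].encode()
```

Trying this derivation against any password-like strings seen in the requests (e.g. the later `p:` header value) is the obvious next step once a ciphertext sample is isolated.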

Deriving commands and inferring shell usage

I’m thinking perhaps we can infer commands from their outputs or payload sizes, though the encryption complicates direct extraction. If we observe /favicondemo.ico in later processes, we might be able to infer the path. To investigate, I'll search for a GET request to /favicondemo.ico followed by a POST request and check the response.

bash -lc python - <<'PY'
import dpkt, socket, re
from collections import defaultdict
path='/mnt/data/traffic_hunt.pcapng'
# find any GET/POST for paths containing nt1 or favicondemo and response status
sessions=defaultdict(lambda:{0:[],1:[]})
with open(path,'rb') as f:
    for ts,buf in dpkt.pcapng.Reader(f):
        try:
            eth=dpkt.ethernet.Ethernet(buf); ip=eth.data; tcp=ip.data
            if not isinstance(ip, dpkt.ip.IP) or not isinstance(tcp, dpkt.tcp.TCP):
                continue
        except Exception:
            continue
        sip=socket.inet_ntoa(ip.src); dip=socket.inet_ntoa(ip.dst)
        if tcp.dport==8080:
            sessions[(sip,tcp.sport,dip,8080)][0].append((tcp.seq,bytes(tcp.data),ts))
        elif tcp.sport==8080:
            sessions[(dip,tcp.dport,sip,8080)][1].append((tcp.seq,bytes(tcp.data),ts))

def reasm(parts):
    segs=sorted([(s,d,ts) for s,d,ts in parts if d], key=lambda x:x[0])
    out=bytearray(); cur=None
    for seq,data,ts in segs:
        if cur is None:
            out.extend(data); cur=seq+len(data)
        elif seq>=cur:
            out.extend(data); cur=seq+len(data)
        else:
            ov=cur-seq
            if ov<len(data):
                out.extend(data[ov:]); cur=seq+len(data)
    return bytes(out)

for k,v in sessions.items():
    c2s=reasm(v[0]); s2c=reasm(v[1])
    if b'1nt1.ico' in c2s or b'favicondemo.ico' in c2s:
        print('\nSTREAM',k)
        # print request lines and first status
        for m in re.finditer(br'(GET|POST) ([^ ]+) HTTP/1.1', c2s):
            p=m.group(2)
            if b'1nt1.ico' in p or b'favicondemo.ico' in p:
                print('REQ',p)
        print('statuses',re.findall(br'HTTP/1.1 (\d+)', s2c)[:20])
        print('resp start', s2c[:300].decode('latin1','ignore').replace('\r','\\r').replace('\n','\\n\n'))
PY

Decrypting responses and tracing webshell framework

I need to decrypt and identify the webshell framework by looking at response markers. The initial chunk length suggests AES ciphertext, with subsequent responses looking like keepalive messages. It may help to track upload success, and possibly correlate with commands sent via the suspected exploit tool, with signs pointing to a memory shell injection.
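The "chunk length suggests AES ciphertext" reasoning can be made explicit: a base64 body that decodes to a non-empty multiple of 16 bytes is consistent with a 128-bit block cipher such as AES-CBC/ECB. A sketch of that heuristic (it only suggests a block cipher, it does not prove one):

```python
# Heuristic: base64 bodies whose decoded length is a multiple of the AES
# block size (16 bytes) are plausible block-cipher ciphertext.
import base64

def looks_like_block_cipher(b64_body: bytes) -> bool:
    try:
        raw = base64.b64decode(b64_body, validate=True)
    except Exception:
        return False
    return len(raw) > 0 and len(raw) % 16 == 0
```

Applying this to each POST body and response separates likely ciphertext from plain keepalives before attempting any decryption.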

Analyzing exploit stream and understanding success

I see that the request body in the last exploit is massive, containing URL-encoded Java class bytes. The success response suggests a memory shell injection, not a file upload. As the injected shell path corresponds to /favicondemo.ico, I need to inspect the request body to extract parameters like passwords or keys for the memshell. I will parse the POST requests and responses to map out the successful exploitation step by step.

bash -lc python - <<'PY'
import dpkt, socket, re, urllib.parse
from collections import defaultdict
path='/mnt/data/traffic_hunt.pcapng'
parts=[]; res=[]
with open(path,'rb') as f:
    for ts,buf in dpkt.pcapng.Reader(f):
        try:
            eth=dpkt.ethernet.Ethernet(buf); ip=eth.data; tcp=ip.data
            if not isinstance(ip, dpkt.ip.IP) or not isinstance(tcp, dpkt.tcp.TCP):
                continue
        except Exception:
            continue
        if socket.inet_ntoa(ip.src)=='10.1.243.155' and socket.inet_ntoa(ip.dst)=='10.1.33.69' and tcp.sport==57066 and tcp.dport==8080:
            parts.append((tcp.seq,bytes(tcp.data)))
        elif socket.inet_ntoa(ip.src)=='10.1.33.69' and socket.inet_ntoa(ip.dst)=='10.1.243.155' and tcp.sport==8080 and tcp.dport==57066:
            res.append((tcp.seq,bytes(tcp.data)))

def reasm(parts):
    segs=sorted([(s,d) for s,d in parts if d], key=lambda x:x[0])
    out=bytearray(); cur=None
    for seq,data in segs:
        if cur is None:
            out.extend(data); cur=seq+len(data)
        elif seq>=cur:
            out.extend(data); cur=seq+len(data)
        else:
            ov=cur-seq
            if ov<len(data):
                out.extend(data[ov:]); cur=seq+len(data)
    return bytes(out)

req=reasm(parts); resp=reasm(res)
print('req len',len(req),'resp len',len(resp))

# split requests by method
indices=[m.start() for m in re.finditer(br'(?m)^(GET|POST) ',req)] + [len(req)]
for i in range(min(len(indices)-1,30)):
    seg=req[indices[i]:indices[i+1]]
    line=seg.split(b'\r\n',1)[0]
    print('\nREQ',i+1,line.decode('latin1','ignore'))
    if b'\r\n\r\n' in seg:
        hdr,body=seg.split(b'\r\n\r\n',1)
        if body[:1]:
            print('body prefix', body[:200].decode('latin1','ignore'))
            if b'=' in body[:500]:
                parsed=urllib.parse.parse_qs(body.decode('latin1','ignore'), keep_blank_values=True)
                keys=list(parsed.keys())[:20]
                print('params keys', keys)
                for k in keys[:10]:
                    v=parsed[k][0]
                    print(' ',k, '=>', v[:120])
PY

bash -lc python - <<'PY'
import dpkt, socket, re, urllib.parse
from collections import defaultdict
path='/mnt/data/traffic_hunt.pcapng'
parts=[]; res=[]
with open(path,'rb') as f:
    for ts,buf in dpkt.pcapng.Reader(f):
        try:
            eth=dpkt.ethernet.Ethernet(buf); ip=eth.data; tcp=ip.data
            if not isinstance(ip, dpkt.ip.IP) or not isinstance(tcp, dpkt.tcp.TCP):
                continue
        except Exception:
            continue
        if socket.inet_ntoa(ip.src)=='10.1.243.155' and socket.inet_ntoa(ip.dst)=='10.1.33.69' and tcp.sport==57066 and tcp.dport==8080:
            parts.append((tcp.seq,bytes(tcp.data)))
        elif socket.inet_ntoa(ip.src)=='10.1.33.69' and socket.inet_ntoa(ip.dst)=='10.1.243.155' and tcp.sport==8080 and tcp.dport==57066:
            res.append((tcp.seq,bytes(tcp.data)))

def reasm(parts):
    segs=sorted([(s,d) for s,d in parts if d], key=lambda x:x[0])
    out=bytearray(); cur=None
    for seq,data in segs:
        if cur is None:
            out.extend(data); cur=seq+len(data)
        elif seq>=cur:
            out.extend(data); cur=seq+len(data)
        else:
            ov=cur-seq
            if ov<len(data):
                out.extend(data[ov:]); cur=seq+len(data)
    return bytes(out)

req=reasm(parts); resp=reasm(res)
indices=[m.start() for m in re.finditer(br'(?m)^(GET|POST) ',req)] + [len(req)]
print('num req',len(indices)-1)
for i in range(len(indices)-1):
    seg=req[indices[i]:indices[i+1]]
    line=seg.split(b'\r\n',1)[0]
    if line.startswith(b'POST'):
        print('\nREQ',i+1,line.decode('latin1','ignore'))
        hdr,body=seg.split(b'\r\n\r\n',1)
        print('headers:\n',hdr.decode('latin1','ignore'))
        print('body len',len(body),'prefix',body[:300].decode('latin1','ignore'))
        parsed=urllib.parse.parse_qs(body.decode('latin1','ignore'), keep_blank_values=True)
        print('params',list(parsed)[:20])
        for k,v in list(parsed.items())[:10]:
            print(' ',k,'=>',v[0][:200])
PY

Decrypting traffic and parsing the parameters

I’ve noticed that the POST request (78) includes a p parameter which could be a password, specifically HWmc2TLDoihdlr0N. It looks like this could potentially be used as an AES key for decryption. The body has a strange multiline format, so I’ll manually parse it, checking around the parameters. I'll examine the tail of the body to confirm what’s going on.

bash -lc python - <<'PY'
import dpkt, socket, re
path='/mnt/data/traffic_hunt.pcapng'
parts=[]
with open(path,'rb') as f:
    for ts,buf in dpkt.pcapng.Reader(f):
        try:
            eth=dpkt.ethernet.Ethernet(buf); ip=eth.data; tcp=ip.data
            if not isinstance(ip, dpkt.ip.IP) or not isinstance(tcp, dpkt.tcp.TCP):
                continue
        except Exception:
            continue
        if socket.inet_ntoa(ip.src)=='10.1.243.155' and socket.inet_ntoa(ip.dst)=='10.1.33.69' and tcp.sport==57066 and tcp.dport==8080:
            parts.append((tcp.seq,bytes(tcp.data)))

# reasm
parts=sorted([(s,d) for s,d in parts if d], key=lambda x:x[0])
out=bytearray(); cur=None
for seq,data in parts:
    if cur is None:
        out.extend(data); cur=seq+len(data)
    elif seq>=cur:
        out.extend(data); cur=seq+len(data)
    else:
        ov=cur-seq
        if ov<len(data):
            out.extend(data[ov:]); cur=seq+len(data)
req=bytes(out)

# isolate last request body
m=list(re.finditer(br'POST / HTTP/1.1', req))[-1]
seg=req[m.start():]
hdr,body=seg.split(b'\r\n\r\n',1)
print('body head decoded:\n', body[:1500].decode('latin1','ignore'))
print('\nbody tail decoded:\n', body[-1000:].decode('latin1','ignore'))
PY

Decoding class file and extracting strings

It looks like the POST body contains a URL-encoded base64 string of a Java class file. I need to first URL-decode and then base64-decode the string. The class name com/summersec/x/BehinderFilter seems to appear in the decoded body, confirming the use of the Behinder Filter webshell. I'll look for relevant constants like password Pwd and path in the decoded class data.
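The two-step decode described above, shown on toy data: URL-decode the POST body, base64-decode the result, and check for the 0xCAFEBABE magic that marks a Java .class file. The sample payload here is synthetic, not the real request body:

```python
# URL-decode, then base64-decode, a POST body suspected to carry class bytes.
import base64
import urllib.parse

def decode_class_payload(body: str) -> bytes:
    return base64.b64decode(urllib.parse.unquote(body))

# Synthetic payload: class-file magic + a few filler bytes, base64'd then
# percent-encoded the way it would travel in a form body.
sample = urllib.parse.quote(base64.b64encode(b'\xca\xfe\xba\xbe\x00\x00\x004').decode())
decoded = decode_class_payload(sample)
```

On the real body, `decoded[:4] == b'\xca\xfe\xba\xbe'` would confirm a class file, after which the constant pool can be searched for strings like the password.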

Trying to clarify how decoded base64 class bytes reveal strings

After URL-decoding the base64, it’s showing strings like com/summersec/x/BehinderFilter, which are part of the class file's constant pool. It seems that parts of the base64-text might decode into readable text, but not everything was fully decoded. I’ll now narrow down the search to parameters in the headers, focusing on 'path' and 'p.'

bash -lc python - <<'PY'
import dpkt, socket, re, urllib.parse, base64
path='/mnt/data/traffic_hunt.pcapng'
parts=[]
with open(path,'rb') as f:
    for ts,buf in dpkt.pcapng.Reader(f):
        try:
            eth=dpkt.ethernet.Ethernet(buf); ip=eth.data; tcp=ip.data
            if not isinstance(ip, dpkt.ip.IP) or not isinstance(tcp, dpkt.tcp.TCP):
                continue
        except Exception:
            continue
        if socket.inet_ntoa(ip.src)=='10.1.243.155' and socket.inet_ntoa(ip.dst)=='10.1.33.69' and tcp.sport==57066 and tcp.dport==8080:
            parts.append((tcp.seq,bytes(tcp.data)))
parts=sorted([(s,d) for s,d in parts if d], key=lambda x:x[0])
out=bytearray(); cur=None
for seq,data in parts:
    if cur is None:
        out.extend(data); cur=seq+len(data)
    elif seq>=cur:
        out.extend(data); cur=seq+len(data)
    else:
        ov=cur-seq
        if ov<len(data):
            out.extend(data[ov:]); cur=seq+len(data)
req=bytes(out)
seg=req[list(re.finditer(br'POST / HTTP/1.1', req))[-1].start():]
hdr,body=seg.split(b'\r\n\r\n',1)
text=body.decode('latin1','ignore')
for pat in ['p:', 'path:', 'BehinderFilter', 'HWmc2TLDoihdlr0N', '/favicondemo.ico', 'user=']:
    idx=text.find(pat)
    print(pat, idx)
    if idx!=-1:
        print(text[max(0,idx-100):idx+300])
    print('---')
PY

Exploring potential causes for output anomalies

It seems the previous output might have printed parameters due to a splitting issue where text from a response or a different request got mixed in. I’ll be careful here. For now, the class name seems clear, and I’ll try searching the raw bytes (like HWmc2...) from the pcap file to trace it further.

bash -lc grep -aob -E 'HWmc2TLDoihdlr0N|favicondemo.ico|BehinderFilter|com/summersec/x/BehinderFilter' /mnt/data/traffic_hunt.pcapng | head -50

Planning decryption and inspection process

I need to figure out the decryption strategy. I might search for the BehinderFilter source, potentially from SummerSec’s memshell. These often use a p parameter for passwords, which might be an MD5 hash. The class file bytes are base64-encoded, so extracting and decompiling them could reveal the password in the constants. Inspecting the headers like p and path is also essential!

Inspecting shell password header

I think the p header could be a password or an AES key. To be sure, I need to inspect the actual request header around this and see if it contains p, path, and maybe even u. I'll print the full header near the end of the request to confirm the details.

bash -lc python - <<'PY'
import dpkt, socket, re
path='/mnt/data/traffic_hunt.pcapng'
parts=[]
with open(path,'rb') as f:
    for ts,buf in dpkt.pcapng.Reader(f):
        try:
            eth=dpkt.ethernet.Ethernet(buf); ip=eth.data; tcp=ip.data
            if not isinstance(ip, dpkt.ip.IP) or not isinstance(tcp, dpkt.tcp.TCP): continue
        except: continue
        if socket.inet_ntoa(ip.src)=='10.1.243.155' and socket.inet_ntoa(ip.dst)=='10.1.33.69' and tcp.sport==57066 and tcp.dport==8080:
            parts.append((tcp.seq,bytes(tcp.data)))
parts=sorted([(s,d) for s,d in parts if d], key=lambda x:x[0])
out=bytearray(); cur=None
for seq,data in parts:
    if cur is None: out.extend(data); cur=seq+len(data)
    elif seq>=cur: out.extend(data); cur=seq+len(data)
    else:
        ov=cur-seq
        if ov<len(data): out.extend(data[ov:]); cur=seq+len(data)
req=bytes(out)
indices=[m.start() for m in re.finditer(br'(?m)^(GET|POST) ', req)] + [len(req)]
seg=req[indices[-2]:indices[-1]]
# print first 2500 bytes, raw escaped
print(seg[:2500].decode('latin1','ignore').replace('\r','\\r').replace('\n','\\n\n'))
PY

Inspecting headers using grep

I need the full output, especially around the lines after the cookie headers. It might be easier if I save the stream and use grep to search for the "p:" header within it. I’ll use Python to write the raw stream to a file and then perform the grep search for targeted inspection.

bash -lc python - <<'PY'
import dpkt, socket, re
parts=[]
with open('/mnt/data/traffic_hunt.pcapng','rb') as f:
    for ts,buf in dpkt.pcapng.Reader(f):
        try:
            eth=dpkt.ethernet.Ethernet(buf); ip=eth.data; tcp=ip.data
            if not isinstance(ip, dpkt.ip.IP) or not isinstance(tcp, dpkt.tcp.TCP): continue
        except: continue
        if socket.inet_ntoa(ip.src)=='10.1.243.155' and socket.inet_ntoa(ip.dst)=='10.1.33.69' and tcp.sport==57066 and tcp.dport==8080:
            parts.append((tcp.seq,bytes(tcp.data)))
parts=sorted([(s,d) for s,d in parts if d], key=lambda x:x[0])
out=bytearray(); cur=None
for seq,data in parts:
    if cur is None: out.extend(data); cur=seq+len(data)
    elif seq>=cur: out.extend(data); cur=seq+len(data)
    else:
        ov=cur-seq
        if ov<len(data): out.extend(data[ov:]); cur=seq+len(data)
open('/tmp/57066_req.bin','wb').write(bytes(out))
print('wrote', len(out))
PY
python - <<'PY'
import os
b=open('/tmp/57066_req.bin','rb').read()
for pat in [b'\np:', b'\r\np:', b'path:', b'HWmc2TLDoihdlr0N', b'favicondemo.ico']:
    idx=b.find(pat)
    print(pat, idx)
    if idx!=-1:
        s=max(0,idx-200); e=min(len(b),idx+400)
        print(b[s:e].decode('latin1','ignore').replace('\r','\\r').replace('\n','\\n\n'))
    print('---')
PY

Decrypting Behinder traffic

I suspect the "p" header is the key, and the string HWmc2TLDoihdlr0N may be the password. Behinder 3.x uses AES encryption with a 16-character password as the key, and the payload is base64-encoded. I need to decrypt the responses and requests using AES with this key, specifically testing the /favicondemo.ico response. I’ll use pycryptodome to try it out.

bash -lc python - <<'PY'
from Crypto.Cipher import AES
print('ok')
PY

Installing pycryptodome

I’ll need to make sure I install the pycryptodome library to properly decrypt the AES-encoded responses. This will help me work with the encryption scheme used by Behinder, allowing me to test decrypting the class or data from the /favicondemo.ico request. Time to move forward with that.

bash -lc python -m pip install pycryptodome -q && python - <<'PY'
from Crypto.Cipher import AES
print('ok')
PY

Decrypting and parsing the response

I’ll need to parse the response body and decrypt it using the AES-ECB key from the p parameter. I’ll start with the first response, which is the 52023 one, and extract the body in base64 format. Then, I’ll decrypt it to see if it reveals useful data. Time to reassemble and proceed cautiously.
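The server replies use chunked transfer encoding, so the body has to be de-chunked before base64-decoding. A minimal sketch of a chunked-body decoder (the name `dechunk` is mine; the responses here happen to carry a single chunk, which the regex below exploits instead):

```python
def dechunk(body: bytes) -> bytes:
    """Decode an HTTP/1.1 chunked transfer-encoded body.

    Each chunk is '<hex size>\r\n<data>\r\n'; a zero-size chunk ends the body.
    """
    out = bytearray()
    pos = 0
    while True:
        eol = body.index(b'\r\n', pos)
        size = int(body[pos:eol].split(b';')[0], 16)  # ignore chunk extensions
        if size == 0:
            break
        start = eol + 2
        out.extend(body[start:start + size])
        pos = start + size + 2  # skip the \r\n trailing the chunk data
    return bytes(out)

sample = b'4\r\nWiki\r\n5\r\npedia\r\n0\r\n\r\n'
# dechunk(sample) → b'Wikipedia'
```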

bash -lc python - <<'PY'
from Crypto.Cipher import AES
from base64 import b64decode
import dpkt, socket, re

# extract first response body of stream 52023
parts=[]
with open('/mnt/data/traffic_hunt.pcapng','rb') as f:
    for ts,buf in dpkt.pcapng.Reader(f):
        try:
            eth=dpkt.ethernet.Ethernet(buf); ip=eth.data; tcp=ip.data
            if not isinstance(ip, dpkt.ip.IP) or not isinstance(tcp, dpkt.tcp.TCP): continue
        except: continue
        if socket.inet_ntoa(ip.src)=='10.1.33.69' and socket.inet_ntoa(ip.dst)=='10.1.243.155' and tcp.sport==8080 and tcp.dport==52023:
            parts.append((tcp.seq,bytes(tcp.data)))
parts=sorted([(s,d) for s,d in parts if d], key=lambda x:x[0])
out=bytearray(); cur=None
for seq,data in parts:
    if cur is None: out.extend(data); cur=seq+len(data)
    elif seq>=cur: out.extend(data); cur=seq+len(data)
    else:
        ov=cur-seq
        if ov<len(data): out.extend(data[ov:]); cur=seq+len(data)
raw=bytes(out)

# first response body chunk
m=re.search(br'HTTP/1.1 200 .*?\r\n\r\n([0-9a-fA-F]+)\r\n([^\r]+)\r\n0\r\n\r\n', raw, re.S)
print('found', bool(m))
size=int(m.group(1),16)
body=m.group(2)
print('chunk size hdr', size, 'actual len', len(body))
print('body', body[:80])
key=b'HWmc2TLDoihdlr0N'
ct=b64decode(body)
print('ct len', len(ct))
pt=AES.new(key,AES.MODE_ECB).decrypt(ct)
print('pt prefix raw', pt[:100])
print('pt hex', pt[:64].hex())
print('pt as latin1', pt[:200].decode('latin1','ignore'))
PY

Testing decryption approach

It seems the AES ECB decryption didn't output readable plaintext, which suggests the data might be encrypted with a different protocol or method, such as XOR or gzip. Alternatively, it could be using AES CBC mode. I’ll check if the decrypted request body is a Java class file by looking for the magic cafebabe header.
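Checking a few well-known magic bytes is a quick way to classify a decrypted blob. A small sketch (the mapping is my own shortlist, not exhaustive):

```python
def sniff_magic(data: bytes) -> str:
    """Guess a payload type from its leading magic bytes."""
    magics = [
        (b'\xca\xfe\xba\xbe', 'java-class'),  # compiled Java class file
        (b'\x1f\x8b', 'gzip'),                # gzip stream
        (b'PK\x03\x04', 'zip'),               # zip / jar archive
        (b'\x78\x9c', 'zlib'),                # common zlib header
    ]
    for magic, name in magics:
        if data.startswith(magic):
            return name
    return 'unknown'
```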

bash -lc python - <<'PY'
from Crypto.Cipher import AES
from base64 import b64decode
import dpkt, socket, re

# first POST /favicondemo.ico body from 52023
parts=[]
with open('/mnt/data/traffic_hunt.pcapng','rb') as f:
    for ts,buf in dpkt.pcapng.Reader(f):
        try:
            eth=dpkt.ethernet.Ethernet(buf); ip=eth.data; tcp=ip.data
            if not isinstance(ip, dpkt.ip.IP) or not isinstance(tcp, dpkt.tcp.TCP): continue
        except: continue
        if socket.inet_ntoa(ip.src)=='10.1.243.155' and socket.inet_ntoa(ip.dst)=='10.1.33.69' and tcp.sport==52023 and tcp.dport==8080:
            parts.append((tcp.seq,bytes(tcp.data)))
parts=sorted([(s,d) for s,d in parts if d], key=lambda x:x[0])
out=bytearray(); cur=None
for seq,data in parts:
    if cur is None: out.extend(data); cur=seq+len(data)
    elif seq>=cur: out.extend(data); cur=seq+len(data)
    else:
        ov=cur-seq
        if ov<len(data): out.extend(data[ov:]); cur=seq+len(data)
raw=bytes(out)
m=re.search(br'POST /favicondemo.ico HTTP/1.1.*?\r\n\r\n([^A-Z]{0,0}[A-Za-z0-9+/=]+)', raw, re.S)
body=m.group(1)
print('body len', len(body), body[:50])
key=b'HWmc2TLDoihdlr0N'
ct=b64decode(body)
print('ct len', len(ct), 'mod', len(ct)%16)
pt=AES.new(key,AES.MODE_ECB).decrypt(ct)
print('pt first 32 hex', pt[:32].hex())
print('starts cafebabe?', pt[:4]==b'\xca\xfe\xba\xbe')
print('starts gzip?', pt[:2]==b'\x1f\x8b')
print('ascii snippet', pt[:200].decode('latin1','ignore'))
PY

Testing key derivation strategies

I’m testing different potential key derivations for the AES key using the value from the "p" header. This includes using the MD5 of "p", the first 16 bytes of it, and checking for other possible transformations, like XOR with session data. The goal is to see if the decrypted request contains recognizable class file data.
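To rank candidate keys without knowing the plaintext format, a printable-byte score over the first N decrypted bytes works well. A sketch of the heuristic the script below relies on:

```python
def printable_score(data: bytes, n: int = 200) -> float:
    """Fraction of the first n bytes that are printable ASCII or whitespace.

    A correct key tends to yield mostly printable plaintext (here: JSON),
    while a wrong key yields roughly uniform random bytes (~37% printable).
    """
    window = data[:n]
    if not window:
        return 0.0
    good = sum(32 <= b < 127 or b in (9, 10, 13) for b in window)
    return good / len(window)
```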

bash -lc python - <<'PY'
from base64 import b64decode
from hashlib import md5
from Crypto.Cipher import AES
import dpkt, socket, re, itertools

# get first request body and first response body from stream 52023
req_parts=[]; resp_parts=[]
with open('/mnt/data/traffic_hunt.pcapng','rb') as f:
    for ts,buf in dpkt.pcapng.Reader(f):
        try:
            eth=dpkt.ethernet.Ethernet(buf); ip=eth.data; tcp=ip.data
            if not isinstance(ip, dpkt.ip.IP) or not isinstance(tcp, dpkt.tcp.TCP): continue
        except: continue
        sip=socket.inet_ntoa(ip.src); dip=socket.inet_ntoa(ip.dst)
        if (sip,dip,tcp.sport,tcp.dport)==('10.1.243.155','10.1.33.69',52023,8080): req_parts.append((tcp.seq,bytes(tcp.data)))
        if (sip,dip,tcp.sport,tcp.dport)==('10.1.33.69','10.1.243.155',8080,52023): resp_parts.append((tcp.seq,bytes(tcp.data)))

def reasm(parts):
    parts=sorted([(s,d) for s,d in parts if d], key=lambda x:x[0])
    out=bytearray(); cur=None
    for seq,data in parts:
        if cur is None: out.extend(data); cur=seq+len(data)
        elif seq>=cur: out.extend(data); cur=seq+len(data)
        else:
            ov=cur-seq
            if ov<len(data): out.extend(data[ov:]); cur=seq+len(data)
    return bytes(out)

reqraw=reasm(req_parts); respraw=reasm(resp_parts)
reqm=re.search(br'POST /favicondemo.ico HTTP/1.1.*?\r\n\r\n([A-Za-z0-9+/=]+)', reqraw, re.S)
respm=re.search(br'HTTP/1.1 200 .*?\r\n\r\n([0-9a-fA-F]+)\r\n([^\r]+)\r\n0\r\n\r\n', respraw, re.S)
reqct=b64decode(reqm.group(1)); respct=b64decode(respm.group(2))
p='HWmc2TLDoihdlr0N'
cands={
    'raw': p.encode(),
    'md5hex16': md5(p.encode()).hexdigest()[:16].encode(),
    'md5raw': md5(p.encode()).digest(),
    'md5hexupper16': md5(p.encode()).hexdigest()[:16].upper().encode(),
    'md5hexlast16': md5(p.encode()).hexdigest()[16:].encode(),
}
for name,key in cands.items():
    if len(key)!=16: continue
    for target,ct in [('req',reqct),('resp',respct)]:
        try:
            pt=AES.new(key,AES.MODE_ECB).decrypt(ct[:len(ct)//16*16])
        except Exception as e:
            print(name,target,'err',e); continue
        score=sum(32<=b<127 or b in (9,10,13) for b in pt[:200])
        print(name,target,'score',score,'first4',pt[:4].hex(),'ascii',pt[:80].decode('latin1','ignore'))
PY

Decrypting responses and understanding protocol

Great, the decryption works using md5(p)[:16]. The response seems to be JSON, including base64 values for status and msg. I’ll need to decode these, and maybe the msg is compressed or further encoded. Each request appears to be a Java class payload, possibly requiring deserialization. Let's investigate further!
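Behinder 3.x derives its AES-128 key as the first 16 hex characters of md5(password) and pads plaintext with PKCS#7. A sketch of the key derivation and unpadding in pure stdlib (the AES-ECB step itself is done with pycryptodome in the scripts below):

```python
from hashlib import md5

def behinder_key(password: bytes) -> bytes:
    """First 16 hex chars of md5(password) — the AES key Behinder 3.x uses."""
    return md5(password).hexdigest()[:16].encode()

def pkcs7_unpad(data: bytes) -> bytes:
    """Strip PKCS#7 padding if the last byte looks like a valid pad length."""
    pad = data[-1]
    return data[:-pad] if 0 < pad <= 16 else data
```

With the key in hand, each message decrypts as base64-decode, then AES-ECB decrypt, then `pkcs7_unpad`, yielding a JSON object with base64-encoded `status` and `msg` fields.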

Decoding class bytes and inspecting responses

It looks like the response might contain base64-encoded JSON messages. The payload seems to be class bytes, which are a bit tricky to interpret. I suspect we can decode the status and msg fields after decryption. I'll automate a process to decrypt and decode all responses and check if we can extract the flag.

bash -lc python - <<'PY'
from hashlib import md5
from Crypto.Cipher import AES
from base64 import b64decode
import gzip, zlib, json, re, dpkt, socket
from collections import defaultdict
key=md5(b'HWmc2TLDoihdlr0N').hexdigest()[:16].encode()
print('key', key)

# parse all requests/responses for stream 52023
req_parts=[]; resp_parts=[]
with open('/mnt/data/traffic_hunt.pcapng','rb') as f:
    for ts,buf in dpkt.pcapng.Reader(f):
        try:
            eth=dpkt.ethernet.Ethernet(buf); ip=eth.data; tcp=ip.data
            if not isinstance(ip, dpkt.ip.IP) or not isinstance(tcp, dpkt.tcp.TCP): continue
        except: continue
        sip=socket.inet_ntoa(ip.src); dip=socket.inet_ntoa(ip.dst)
        if (sip,dip,tcp.sport,tcp.dport)==('10.1.243.155','10.1.33.69',52023,8080): req_parts.append((tcp.seq,bytes(tcp.data)))
        if (sip,dip,tcp.sport,tcp.dport)==('10.1.33.69','10.1.243.155',8080,52023): resp_parts.append((tcp.seq,bytes(tcp.data)))

def reasm(parts):
    parts=sorted([(s,d) for s,d in parts if d], key=lambda x:x[0])
    out=bytearray(); cur=None
    for seq,data in parts:
        if cur is None: out.extend(data); cur=seq+len(data)
        elif seq>=cur: out.extend(data); cur=seq+len(data)
        else:
            ov=cur-seq
            if ov<len(data): out.extend(data[ov:]); cur=seq+len(data)
    return bytes(out)

reqraw=reasm(req_parts); respraw=reasm(resp_parts)

# split requests and responses with a simple regex
reqs=[]
for m in re.finditer(br'POST /favicondemo.ico HTTP/1.1.*?\r\n\r\n([A-Za-z0-9+/=]+)', reqraw, re.S):
    reqs.append(m.group(1))
resps=[]
for m in re.finditer(br'HTTP/1.1 200 .*?\r\n\r\n([0-9a-fA-F]+)\r\n([^\r]+)\r\n0\r\n\r\n', respraw, re.S):
    resps.append(m.group(2))
print('counts', len(reqs), len(resps))

def unpad(x):
    pad=x[-1]
    return x[:-pad] if 0 < pad <= 16 else x

for i,body in enumerate(resps[:15],1):
    ct=b64decode(body)
    pt=unpad(AES.new(key,AES.MODE_ECB).decrypt(ct))
    print('\nRESP', i, 'raw prefix', pt[:120])
    try:
        j=json.loads(pt.decode('utf-8','ignore'))
        status=b64decode(j['status']).decode('utf-8','ignore')
        msg_bytes=b64decode(j['msg'])
        print(' status', status, 'msg len', len(msg_bytes), 'msg hex', msg_bytes[:16].hex(), 'ascii', msg_bytes[:120].decode('utf-8','ignore'))
        # try ungzip/zlib
        for fn,name in [(gzip.decompress,'gzip'),(zlib.decompress,'zlib')]:
            try:
                d=fn(msg_bytes)
                print('  ', name, '->', d[:200])
            except: pass
    except Exception as e:
        print(' json err', e)
PY

Decoding responses and searching for flags

The responses are decoding successfully, but now I need to process all of them, as the flag might be in one of the responses. Each message seems to be JSON or arrays with base64-encoded fields, so I’ll recursively decode until I find anything interesting like flag strings or file paths. Let’s focus on decoding and searching for those keywords.

bash -lc python - <<'PY'
from hashlib import md5
from Crypto.Cipher import AES
from base64 import b64decode
import json, re, dpkt, socket
from collections import defaultdict
key=md5(b'HWmc2TLDoihdlr0N').hexdigest()[:16].encode()
flows=[52023,52027,52028,52029,52030,52031,52032,52033,52034,52035,52036]
all_msgs=[]
with open('/mnt/data/traffic_hunt.pcapng','rb') as f:
    packets=list(dpkt.pcapng.Reader(f))

def reasm(parts):
    parts=sorted([(s,d,ts) for s,d,ts in parts if d], key=lambda x:x[0])
    out=bytearray(); cur=None
    for seq,data,ts in parts:
        if cur is None: out.extend(data); cur=seq+len(data)
        elif seq>=cur: out.extend(data); cur=seq+len(data)
        else:
            ov=cur-seq
            if ov<len(data): out.extend(data[ov:]); cur=seq+len(data)
    return bytes(out)

for port in flows:
    req_parts=[]; resp_parts=[]
    for ts,buf in packets:
        try:
            eth=dpkt.ethernet.Ethernet(buf); ip=eth.data; tcp=ip.data
            if not isinstance(ip, dpkt.ip.IP) or not isinstance(tcp, dpkt.tcp.TCP): continue
        except: continue
        sip=socket.inet_ntoa(ip.src); dip=socket.inet_ntoa(ip.dst)
        if (sip,dip,tcp.sport,tcp.dport)==('10.1.243.155','10.1.33.69',port,8080): req_parts.append((tcp.seq,bytes(tcp.data),ts))
        if (sip,dip,tcp.sport,tcp.dport)==('10.1.33.69','10.1.243.155',8080,port): resp_parts.append((tcp.seq,bytes(tcp.data),ts))
    reqraw=reasm(req_parts); respraw=reasm(resp_parts)
    for i,m in enumerate(re.finditer(br'HTTP/1.1 200 .*?\r\n\r\n([0-9a-fA-F]+)\r\n([^\r]+)\r\n0\r\n\r\n', respraw, re.S),1):
        body=m.group(2)
        ct=b64decode(body)
        pt=AES.new(key,AES.MODE_ECB).decrypt(ct)
        pad=pt[-1]
        if 0<pad<=16: pt=pt[:-pad]
        try:
            j=json.loads(pt.decode('utf-8','ignore'))
            status=b64decode(j['status']).decode('utf-8','ignore') if 'status' in j else ''
            msg=b64decode(j['msg']) if 'msg' in j else b''
            all_msgs.append((port,i,status,msg,pt))
        except Exception as e:
            all_msgs.append((port,i,'ERR',pt,pt))

print('total msgs', len(all_msgs))

# search raw decoded msg for keywords or flag-like strings
for port,i,status,msg,pt in all_msgs:
    txt=msg.decode('utf-8','ignore')
    if re.search(r'dart{|flag{|/flag|flag|ctf', txt, re.I):
        print('\nHIT', port, i, status)
        print(txt[:2000])

# print summary of all message lengths
print('lengths', [(port,i,len(msg)) for port,i,_,msg,_ in all_msgs[:40]])
PY

Rerunning script to confirm result

It seems like there's no output, which might indicate a script error. I’ll take a more modest approach and rerun the process to confirm whether everything works as expected. Let’s debug step by step and make sure I can get the right results.

bash -lc python - <<'PY'
from hashlib import md5
from Crypto.Cipher import AES
from base64 import b64decode
import json, re, dpkt, socket
key=md5(b'HWmc2TLDoihdlr0N').hexdigest()[:16].encode()
ports=[52023,52027,52028,52029,52030,52031,52032,52033,52034,52035,52036]
packets=list(dpkt.pcapng.Reader(open('/mnt/data/traffic_hunt.pcapng','rb')))

def reasm(parts):
    parts=sorted([(s,d) for s,d in parts if d], key=lambda x:x[0])
    out=bytearray(); cur=None
    for seq,data in parts:
        if cur is None: out.extend(data); cur=seq+len(data)
        elif seq>=cur: out.extend(data); cur=seq+len(data)
        else:
            ov=cur-seq
            if ov<len(data): out.extend(data[ov:]); cur=seq+len(data)
    return bytes(out)

all_msgs=[]
for port in ports:
    req_parts=[]; resp_parts=[]
    for ts,buf in packets:
        try:
            eth=dpkt.ethernet.Ethernet(buf); ip=eth.data; tcp=ip.data
            if not isinstance(ip, dpkt.ip.IP) or not isinstance(tcp, dpkt.tcp.TCP): continue
        except: continue
        sip=socket.inet_ntoa(ip.src); dip=socket.inet_ntoa(ip.dst)
        if (sip,dip,tcp.sport,tcp.dport)==('10.1.243.155','10.1.33.69',port,8080): req_parts.append((tcp.seq,bytes(tcp.data)))
        if (sip,dip,tcp.sport,tcp.dport)==('10.1.33.69','10.1.243.155',8080,port): resp_parts.append((tcp.seq,bytes(tcp.data)))
    respraw=reasm(resp_parts)
    count=0
    for m in re.finditer(br'HTTP/1.1 200 .*?\r\n\r\n([0-9a-fA-F]+)\r\n([^\r]+)\r\n0\r\n\r\n', respraw, re.S):
        body=m.group(2)
        ct=b64decode(body)
        pt=AES.new(key,AES.MODE_ECB).decrypt(ct)
        pad=pt[-1]
        if 0<pad<=16: pt=pt[:-pad]
        try:
            j=json.loads(pt.decode('utf-8','ignore'))
            status=b64decode(j['status']).decode('utf-8','ignore') if 'status' in j else ''
            msg=b64decode(j['msg']) if 'msg' in j else b''
            all_msgs.append((port,count+1,status,msg))
        except Exception as e:
            all_msgs.append((port,count+1,'ERR',pt))
        count+=1
    print('port', port, 'responses', count)
print('total', len(all_msgs))
for item in all_msgs[:25]:
    port,i,status,msg=item
    if isinstance(msg, bytes): txt=msg.decode('utf-8','ignore')
    else: txt=str(msg)
    print('\n', port, i, status, 'len', len(msg), 'snippet', txt[:160].replace('\n','\\n'))
PY

Optimizing to avoid issues

It looks like there was no output, which might be due to the command hanging after reading many packets. That’s strange. It's unlikely the process was killed, so instead, let’s switch to a more optimized, one-pass approach to avoid this sort of issue and get the expected result.

bash -lc python - <<'PY'
from hashlib import md5
from Crypto.Cipher import AES
from base64 import b64decode
import json, re, dpkt, socket
from collections import defaultdict
key=md5(b'HWmc2TLDoihdlr0N').hexdigest()[:16].encode()
ports={52023,52027,52028,52029,52030,52031,52032,52033,52034,52035,52036}
resp_parts=defaultdict(list)
with open('/mnt/data/traffic_hunt.pcapng','rb') as f:
    for ts,buf in dpkt.pcapng.Reader(f):
        try:
            eth=dpkt.ethernet.Ethernet(buf); ip=eth.data; tcp=ip.data
            if not isinstance(ip, dpkt.ip.IP) or not isinstance(tcp, dpkt.tcp.TCP): continue
        except: continue
        sip=socket.inet_ntoa(ip.src); dip=socket.inet_ntoa(ip.dst)
        if sip=='10.1.33.69' and dip=='10.1.243.155' and tcp.sport==8080 and tcp.dport in ports:
            resp_parts[tcp.dport].append((tcp.seq,bytes(tcp.data)))
print('collected', {k:len(v) for k,v in resp_parts.items()})

def reasm(parts):
    parts=sorted([(s,d) for s,d in parts if d], key=lambda x:x[0])
    out=bytearray(); cur=None
    for seq,data in parts:
        if cur is None: out.extend(data); cur=seq+len(data)
        elif seq>=cur: out.extend(data); cur=seq+len(data)
        else:
            ov=cur-seq
            if ov<len(data): out.extend(data[ov:]); cur=seq+len(data)
    return bytes(out)

all_msgs=[]
for port in sorted(resp_parts):
    raw=reasm(resp_parts[port])
    ms=list(re.finditer(br'HTTP/1.1 200 .*?\r\n\r\n([0-9a-fA-F]+)\r\n([^\r]+)\r\n0\r\n\r\n', raw, re.S))
    print('port', port, 'matches', len(ms))
    for idx,m in enumerate(ms,1):
        try:
            ct=b64decode(m.group(2))
            pt=AES.new(key,AES.MODE_ECB).decrypt(ct)
            pad=pt[-1]
            if 0<pad<=16: pt=pt[:-pad]
            j=json.loads(pt.decode('utf-8','ignore'))
            status=b64decode(j['status']).decode('utf-8','ignore') if 'status' in j else ''
            msg=b64decode(j['msg']) if 'msg' in j else b''
            all_msgs.append((port,idx,status,msg))
        except Exception as e:
            all_msgs.append((port,idx,'ERR',str(e).encode()))
print('total msgs', len(all_msgs))
for port,idx,status,msg in all_msgs[:30]:
    print(port, idx, status, len(msg), msg[:120].decode('utf-8','ignore').replace('\n','\\n'))
PY

Decoding and searching for flags

I’ll need to search earlier messages for any interesting outputs like file content. Some of these messages might not just be simple JSON and could contain download chunks. I’ll focus on decoding nested JSON arrays and recursive base64 decodings, searching for possible flag strings or file names like flags, "dart", or particular paths.
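The recursive decode boils down to: try base64 on every string field; if the decoded bytes are UTF-8 JSON, recurse. A sketch (helper names are mine; this mirrors the decode_obj logic used in the script below):

```python
import base64, json, re

def maybe_b64(s):
    """Return decoded bytes if s looks like standard base64, else None."""
    if not isinstance(s, str) or len(s) < 4 or len(s) % 4 != 0:
        return None
    if not re.fullmatch(r'[A-Za-z0-9+/=]+', s):
        return None
    try:
        return base64.b64decode(s)
    except Exception:
        return None

def deep_decode(obj):
    """Recursively base64-decode string fields, re-parsing nested JSON."""
    if isinstance(obj, dict):
        return {k: deep_decode(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [deep_decode(v) for v in obj]
    if isinstance(obj, str):
        raw = maybe_b64(obj)
        if raw is not None:
            try:
                txt = raw.decode('utf-8')
                if txt and txt[0] in '[{':   # decoded text is itself JSON
                    try:
                        return deep_decode(json.loads(txt))
                    except Exception:
                        pass
                return txt
            except UnicodeDecodeError:
                return raw                    # binary payload: keep raw bytes
    return obj

sample = {'status': base64.b64encode(b'success').decode(),
          'msg': base64.b64encode(b'/var/tmp').decode()}
```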

bash -lc python - <<'PY'
from hashlib import md5
from Crypto.Cipher import AES
from base64 import b64decode
import json, re, dpkt, socket, base64
from collections import defaultdict
key=md5(b'HWmc2TLDoihdlr0N').hexdigest()[:16].encode()
ports={52023,52027,52028,52029,52030,52031,52032,52033,52034,52035}
resp_parts=defaultdict(list)
with open('/mnt/data/traffic_hunt.pcapng','rb') as f:
    for ts,buf in dpkt.pcapng.Reader(f):
        try:
            eth=dpkt.ethernet.Ethernet(buf); ip=eth.data; tcp=ip.data
            if not isinstance(ip, dpkt.ip.IP) or not isinstance(tcp, dpkt.tcp.TCP): continue
        except: continue
        sip=socket.inet_ntoa(ip.src); dip=socket.inet_ntoa(ip.dst)
        if sip=='10.1.33.69' and dip=='10.1.243.155' and tcp.sport==8080 and tcp.dport in ports:
            resp_parts[tcp.dport].append((tcp.seq,bytes(tcp.data)))

def reasm(parts):
    parts=sorted([(s,d) for s,d in parts if d], key=lambda x:x[0])
    out=bytearray(); cur=None
    for seq,data in parts:
        if cur is None: out.extend(data); cur=seq+len(data)
        elif seq>=cur: out.extend(data); cur=seq+len(data)
        else:
            ov=cur-seq
            if ov<len(data): out.extend(data[ov:]); cur=seq+len(data)
    return bytes(out)

def unpad(pt):
    pad=pt[-1]
    return pt[:-pad] if 0<pad<=16 else pt

def maybe_b64(s):
    if not isinstance(s,str) or len(s)<4 or len(s)%4!=0: return None
    if not re.fullmatch(r'[A-Za-z0-9+/=]+', s): return None
    try: return base64.b64decode(s)
    except: return None

def decode_obj(o):
    if isinstance(o,dict): return {k: decode_obj(v) for k,v in o.items()}
    if isinstance(o,list): return [decode_obj(v) for v in o]
    if isinstance(o,str):
        b=maybe_b64(o)
        if b is not None:
            try:
                txt=b.decode('utf-8')
                # if decoded text is json, decode recursively
                if txt and txt[0] in '[{':
                    try: return decode_obj(json.loads(txt))
                    except: pass
                return txt
            except: return b
        return o
    return o

interesting=[]
all_decoded=[]
for port in sorted(resp_parts):
    raw=reasm(resp_parts[port])
    for idx,m in enumerate(re.finditer(br'HTTP/1.1 200 .*?\r\n\r\n([0-9a-fA-F]+)\r\n([^\r]+)\r\n0\r\n\r\n', raw, re.S),1):
        ct=b64decode(m.group(2)); pt=unpad(AES.new(key,AES.MODE_ECB).decrypt(ct))
        try:
            j=json.loads(pt.decode('utf-8','ignore'))
            dec=decode_obj(j)
        except Exception:
            dec=pt
        all_decoded.append((port,idx,dec))
        txt=json.dumps(dec, ensure_ascii=False, default=lambda x:x.decode('latin1','ignore') if isinstance(x,(bytes,bytearray)) else str(x)) if not isinstance(dec,(bytes,bytearray)) else dec.decode('latin1','ignore')
        if re.search(r'dart{|flag{|ctf{|flag|passwd|shadow|secret|token', txt, re.I):
            interesting.append((port,idx,txt[:3000]))
print('interesting count', len(interesting))
for port,idx,txt in interesting[:30]:
    print('\nITEM', port, idx)
    print(txt)

# print first 12 decoded object summaries
for port,idx,dec in all_decoded[:12]:
    txt=json.dumps(dec, ensure_ascii=False, default=lambda x:x.decode('latin1','ignore') if isinstance(x,(bytes,bytearray)) else str(x)) if not isinstance(dec,(bytes,bytearray)) else dec.decode('latin1','ignore')
    print('\n---', port, idx, '---')
    print(txt[:1200])
PY

bash -lc python - <<'PY'
from hashlib import md5
from Crypto.Cipher import AES
from base64 import b64decode
import json, re, dpkt, socket, base64
from collections import defaultdict
key=md5(b'HWmc2TLDoihdlr0N').hexdigest()[:16].encode()
ports={52023,52027,52028,52029,52030,52031,52032,52033,52034,52035}
resp_parts=defaultdict(list)
with open('/mnt/data/traffic_hunt.pcapng','rb') as f:
    for ts,buf in dpkt.pcapng.Reader(f):
        try:
            eth=dpkt.ethernet.Ethernet(buf); ip=eth.data; tcp=ip.data
            if not isinstance(ip, dpkt.ip.IP) or not isinstance(tcp, dpkt.tcp.TCP): continue
        except: continue
        sip=socket.inet_ntoa(ip.src); dip=socket.inet_ntoa(ip.dst)
        if sip=='10.1.33.69' and dip=='10.1.243.155' and tcp.sport==8080 and tcp.dport in ports:
            resp_parts[tcp.dport].append((tcp.seq,bytes(tcp.data)))

def reasm(parts):
    parts=sorted([(s,d) for s,d in parts if d], key=lambda x:x[0])
    out=bytearray(); cur=None
    for seq,data in parts:
        if cur is None: out.extend(data); cur=seq+len(data)
        elif seq>=cur: out.extend(data); cur=seq+len(data)
        else:
            ov=cur-seq
            if ov<len(data): out.extend(data[ov:]); cur=seq+len(data)
    return bytes(out)

def unpad(pt):
    pad=pt[-1]
    return pt[:-pad] if 0<pad<=16 else pt

def maybe_b64(s):
    if not isinstance(s,str) or len(s)<4 or len(s)%4!=0: return None
    if not re.fullmatch(r'[A-Za-z0-9+/=]+', s): return None
    try: return base64.b64decode(s)
    except: return None

def decode_obj(o):
    if isinstance(o,dict): return {k: decode_obj(v) for k,v in o.items()}
    if isinstance(o,list): return [decode_obj(v) for v in o]
    if isinstance(o,str):
        b=maybe_b64(o)
        if b is not None:
            try:
                txt=b.decode('utf-8')
                if txt and txt[0] in '[{':
                    try: return decode_obj(json.loads(txt))
                    except: pass
                return txt
            except: return b
        return o
    return o

all_items=[]
for port in sorted(resp_parts):
    raw=reasm(resp_parts[port])
    for idx,m in enumerate(re.finditer(br'HTTP/1.1 200 .*?\r\n\r\n([0-9a-fA-F]+)\r\n([^\r]+)\r\n0\r\n\r\n', raw, re.S),1):
        ct=b64decode(m.group(2)); pt=unpad(AES.new(key,AES.MODE_ECB).decrypt(ct))
        try:
            j=json.loads(pt.decode('utf-8','ignore')); dec=decode_obj(j)
        except:
            dec=pt.decode('latin1','ignore')
        txt=json.dumps(dec, ensure_ascii=False, default=lambda x:x.decode('latin1','ignore') if isinstance(x,(bytes,bytearray)) else str(x)) if not isinstance(dec,(bytes,bytearray)) else dec.decode('latin1','ignore')
        all_items.append((port,idx,len(txt),txt))

# longest messages
for port,idx,l,txt in sorted(all_items, key=lambda x:x[2], reverse=True)[:40]:
    print('\nITEM', port, idx, 'len', l)
    print(txt[:1600])
PY

Investigating potential file and flag

Response 62 contains something that looks like a file hash, and response 65 shows a `pwd` result of /var/tmp. Response 63 lists /var/tmp, which contains a file named out — that file might hold the flag. I should check the responses around messages 60-67 and any downloads of those files to confirm.

Checking for file download chunks

I need to look at port 52027 to see if it corresponds to the download of the out file in chunks. The first response might just be a handshake, as it’s only 185 bytes. I'll focus on unique messages towards the end to figure out if any file chunks are involved.

bash -lc python - <<'PY'
from hashlib import md5
from Crypto.Cipher import AES
from base64 import b64decode
import json, re, dpkt, socket, base64
from collections import defaultdict
key=md5(b'HWmc2TLDoihdlr0N').hexdigest()[:16].encode()
ports={52023,52027,52028,52029,52030,52031,52032,52033,52034,52035}
resp_parts=defaultdict(list)
with open('/mnt/data/traffic_hunt.pcapng','rb') as f:
    for ts,buf in dpkt.pcapng.Reader(f):
        try:
            eth=dpkt.ethernet.Ethernet(buf); ip=eth.data; tcp=ip.data
            if not isinstance(ip, dpkt.ip.IP) or not isinstance(tcp, dpkt.tcp.TCP): continue
        except: continue
        sip=socket.inet_ntoa(ip.src); dip=socket.inet_ntoa(ip.dst)
        if sip=='10.1.33.69' and dip=='10.1.243.155' and tcp.sport==8080 and tcp.dport in ports:
            resp_parts[tcp.dport].append((tcp.seq,bytes(tcp.data)))

def reasm(parts):
    parts=sorted([(s,d) for s,d in parts if d], key=lambda x:x[0])
    out=bytearray(); cur=None
    for seq,data in parts:
        if cur is None: out.extend(data); cur=seq+len(data)
        elif seq>=cur: out.extend(data); cur=seq+len(data)
        else:
            ov=cur-seq
            if ov<len(data): out.extend(data[ov:]); cur=seq+len(data)
    return bytes(out)

def unpad(pt):
    pad=pt[-1]
    return pt[:-pad] if 0<pad<=16 else pt

for port in sorted(resp_parts):
    raw=reasm(resp_parts[port])
    ms=list(re.finditer(br'HTTP/1.1 200 .*?\r\n\r\n([0-9a-fA-F]+)\r\n([^\r]+)\r\n0\r\n\r\n', raw, re.S))
    print('\nPORT', port, 'count', len(ms))
    for idx,m in list(enumerate(ms,1))[-8:]:
        ct=b64decode(m.group(2)); pt=unpad(AES.new(key,AES.MODE_ECB).decrypt(ct))
        try:
            j=json.loads(pt.decode('utf-8','ignore'))
            status=b64decode(j['status']).decode() if 'status' in j else ''
            msg=b64decode(j['msg']) if 'msg' in j else b''
            txt=msg.decode('utf-8','ignore')
        except Exception as e:
            txt=pt.decode('latin1','ignore')
            status='ERR'
        print(' idx', idx, 'status', status, 'len', len(txt), 'snippet', repr(txt[:200]))
PY

Analyzing messages for file downloads and operations

I need to understand the operations from requests, especially between indices 62-67. Response 62 might be an MD5 file hash, and the following responses could involve file operations, like listing directories. Response 1 might contain a file chunk disguised as an encrypted string. I'll check these decrypted requests for clues, too.

Identifying class names from requests and strings

I’m thinking of identifying class names by parsing requests using the constant pool or by searching for strings within decrypted class files. Each request might contain a payload class and parameters. I'll inspect decrypted class bytes, extracting class names via signatures or string patterns to infer operation names and sequences.
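A strings(1)-style extractor is enough to pull constant-pool text out of raw class bytes without a full class-file parser. A sketch (the 4-byte minimum run length is an arbitrary choice of mine):

```python
import re

def extract_strings(data: bytes, min_len: int = 4):
    """Return runs of printable ASCII of at least min_len bytes, like strings(1)."""
    pattern = rb'[\x20-\x7e]{%d,}' % min_len
    return [m.group().decode('ascii') for m in re.finditer(pattern, data)]

# Hypothetical class-file-like blob for illustration:
blob = (b'\xca\xfe\xba\xbe\x00\x00\x00\x34'
        b'com/summersec/x/BehinderFilter\x00\x01java/lang/Object')
```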

Analyzing payload classes for potential flag or upload

I’m considering that a FileOperation with mode=download and path=/var/tmp/out might give a clue about the contents—perhaps it’s the flag or uploaded data. I need to parse request class names and parameters to see if any embedded strings in the payload reveal paths or commands, especially in requests 60-67.

bash -lc python - <<'PY'
from hashlib import md5
from Crypto.Cipher import AES
from base64 import b64decode
import re, dpkt, socket, subprocess, os
key=md5(b'HWmc2TLDoihdlr0N').hexdigest()[:16].encode()

# requests from port 52023 only
parts=[]
with open('/mnt/data/traffic_hunt.pcapng','rb') as f:
    for ts,buf in dpkt.pcapng.Reader(f):
        try:
            eth=dpkt.ethernet.Ethernet(buf); ip=eth.data; tcp=ip.data
            if not isinstance(ip, dpkt.ip.IP) or not isinstance(tcp, dpkt.tcp.TCP): continue
        except: continue
        if socket.inet_ntoa(ip.src)=='10.1.243.155' and socket.inet_ntoa(ip.dst)=='10.1.33.69' and tcp.sport==52023 and tcp.dport==8080:
            parts.append((tcp.seq,bytes(tcp.data)))
parts=sorted([(s,d) for s,d in parts if d], key=lambda x:x[0])
out=bytearray(); cur=None
for seq,data in parts:
    if cur is None: out.extend(data); cur=seq+len(data)
    elif seq>=cur: out.extend(data); cur=seq+len(data)
    else:
        ov=cur-seq
        if ov<len(data): out.extend(data[ov:]); cur=seq+len(data)
raw=bytes(out)
reqs=[m.group(1) for m in re.finditer(br'POST /favicondemo.ico HTTP/1.1.*?\r\n\r\n([A-Za-z0-9+/=]+)', raw, re.S)]
print('req count', len(reqs))
os.makedirs('/tmp/reqs', exist_ok=True)
for idx,b64 in enumerate(reqs,1):
    ct=b64decode(b64); pt=AES.new(key,AES.MODE_ECB).decrypt(ct)
    pad=pt[-1]
    if 0<pad<=16: pt=pt[:-pad]
    fn=f'/tmp/reqs/r{idx:02d}.bin'
    open(fn,'wb').write(pt)

# inspect selected requests with strings
for idx in [1,2,3,4,5,6,7,8,9,10,60,61,62,63,64,65,66,67]:
    fn=f'/tmp/reqs/r{idx:02d}.bin'
    if not os.path.exists(fn): continue
    print('\nREQ', idx, 'size', os.path.getsize(fn))
    out=subprocess.check_output(['strings','-a',fn]).decode('latin1','ignore').splitlines()
    # filter short/common lines
    filt=[x for x in out if len(x)>=3][:80]
    print('\n'.join(filt[:40]))
PY

Improving request parsing

Some requests might be malformed due to regex over-capturing or newlines. To improve parsing, I’ll consider splitting requests by the Content-Length header and handling them in sequence. Using dpkt.http might work, but parsing sequentially through raw stream data might actually be easier, especially for port 52023.
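Splitting on Content-Length sequentially avoids the over-capturing regex. A sketch of that parser (assumes well-formed requests each carrying a Content-Length header; the helper name is mine):

```python
import re

def split_requests(stream: bytes, request_line: bytes):
    """Walk a reassembled client stream, slicing each body by Content-Length."""
    bodies, pos = [], 0
    while True:
        start = stream.find(request_line, pos)
        if start == -1:
            break
        hdr_end = stream.find(b'\r\n\r\n', start)
        if hdr_end == -1:
            break
        m = re.search(rb'Content-Length: (\d+)', stream[start:hdr_end])
        if not m:                      # no body: skip past the headers
            pos = hdr_end + 4
            continue
        length = int(m.group(1))
        body_start = hdr_end + 4
        bodies.append(stream[body_start:body_start + length])
        pos = body_start + length      # resume right after this body
    return bodies

stream = (b'POST /x HTTP/1.1\r\nContent-Length: 5\r\n\r\nAAAAA'
          b'POST /x HTTP/1.1\r\nContent-Length: 3\r\n\r\nBBB')
# split_requests(stream, b'POST /x HTTP/1.1') → [b'AAAAA', b'BBB']
```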

bash -lc python - <<'PY'
from hashlib import md5
from Crypto.Cipher import AES
from base64 import b64decode
import re, dpkt, socket, os, subprocess
key=md5(b'HWmc2TLDoihdlr0N').hexdigest()[:16].encode()
parts=[]
with open('/mnt/data/traffic_hunt.pcapng','rb') as f:
    for ts,buf in dpkt.pcapng.Reader(f):
        try:
            eth=dpkt.ethernet.Ethernet(buf); ip=eth.data; tcp=ip.data
            if not isinstance(ip, dpkt.ip.IP) or not isinstance(tcp, dpkt.tcp.TCP): continue
        except: continue
        if socket.inet_ntoa(ip.src)=='10.1.243.155' and socket.inet_ntoa(ip.dst)=='10.1.33.69' and tcp.sport==52023 and tcp.dport==8080:
            parts.append((tcp.seq,bytes(tcp.data)))
parts=sorted([(s,d) for s,d in parts if d], key=lambda x:x[0])
out=bytearray(); cur=None
for seq,data in parts:
    if cur is None: out.extend(data); cur=seq+len(data)
    elif seq>=cur: out.extend(data); cur=seq+len(data)
    else:
        ov=cur-seq
        if ov<len(data): out.extend(data[ov:]); cur=seq+len(data)
raw=bytes(out)

# sequential parse
pos=0; reqs=[]
while True:
    m=re.search(br'POST /favicondemo.ico HTTP/1.1\r\n', raw[pos:])
    if not m: break
    start=pos+m.start()
    hdr_end=raw.find(b'\r\n\r\n', start)
    hdr=raw[start:hdr_end].decode('latin1','ignore')
    clm=re.search(r'Content-Length: (\d+)', hdr)
    if not clm: break
    cl=int(clm.group(1))
    body_start=hdr_end+4
    body=raw[body_start:body_start+cl]
    reqs.append(body)
    pos=body_start+cl
print('parsed', len(reqs))
os.makedirs('/tmp/reqs', exist_ok=True)
for idx,body in enumerate(reqs,1):
    try:
        ct=b64decode(body)
    except Exception:
        print('bad b64', idx, len(body), body[:40]); continue
    if len(ct)%16!=0:
        print('unaligned', idx, len(ct), len(body)); continue
    pt=AES.new(key,AES.MODE_ECB).decrypt(ct)
    pad=pt[-1]
    if 0<pad<=16: pt=pt[:-pad]
    open(f'/tmp/reqs/r{idx:02d}.bin','wb').write(pt)
print('wrote')
for idx in [1,2,3,4,5,6,7,8,9,10,28,29,30,31,32,33,34]:
    fn=f'/tmp/reqs/r{idx:02d}.bin'
    if not os.path.exists(fn): continue
    print('\nREQ', idx, 'size', os.path.getsize(fn))
    out=subprocess.check_output(['strings','-a',fn]).decode('latin1','ignore').splitlines()
    # keep moderately informative lines
    keep=[]
    for x in out:
        if len(x)>=3 and x not in keep: keep.append(x)
    print('\n'.join(keep[:50]))
PY

Inspecting request range and addressing output issue

It looks like the output shows requests from 10 onward, but I need to focus on requests 62-67. Only 34 requests were parsed in this pass, and I may have missed some because the large output was truncated after REQ10. I'll review the later requests individually and use string searches to find specific patterns, starting with request 62.

bash -lc python - <<'PY'
import os, subprocess
for idx in [60,61,62,63,64,65,66,67]:
    fn=f'/tmp/reqs/r{idx:02d}.bin'
    if not os.path.exists(fn): print('missing',idx); continue
    print('\nREQ',idx,'size',os.path.getsize(fn))
    out=subprocess.check_output(['strings','-a',fn]).decode('latin1','ignore').splitlines()
    keep=[]
    for x in out:
        if len(x)>=3 and x not in keep: keep.append(x)
    # show first 80 and lines containing slash/out/tmp/mode/hash etc
    for x in keep[:60]: print(x)
PY

bash -lc ls -lah /mnt/data | sed -n '1,120p'

Rebuilding decryption script and inspecting traffic

I'm considering recreating some scripts to inspect the pcap using tshark or Python. I’ll focus on TCP streams between 10.1.243.155 and 10.1.33.69. I need to check the requests after req67 and parse HTTP POST responses, specifically for "/favicondemo.ico". Tshark or pyshark might be an easier solution, so I'll check them out.
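The stream reconstruction described here boils down to one idea: order segments by TCP sequence number and drop the already-seen prefix of any overlapping retransmission. A minimal sketch with synthetic segment values:

```python
def join_segments(segdict: dict[int, bytes]) -> bytes:
    """Concatenate TCP payloads in sequence order, skipping overlapping prefixes."""
    out, cur = b'', None
    for seq, data in sorted(segdict.items()):
        if not data:
            continue
        if cur is None or seq >= cur:   # first segment, or no overlap
            out += data
            cur = seq + len(data)
        else:                           # partial overlap: keep only the new tail
            overlap = cur - seq
            if overlap < len(data):
                out += data[overlap:]
                cur += len(data) - overlap
    return out

# seq 100 carries b'HELLO'; a retransmission at seq 103 overlaps by 2 bytes
segs = {100: b'HELLO', 103: b'LO WORLD'}
print(join_segments(segs))  # b'HELLO WORLD'
```

This is deliberately naive (no SYN/FIN handling, no out-of-window checks), which is acceptable here because each capture stream is short and well-behaved.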

bash -lc which tshark || true; which tcpflow || true; which ngrep || true; python -V

Using Python and dpkt for decryption

I’ve decided to go with Python and dpkt for faster processing instead of tshark. I'll write a script to parse the flows, reconstruct the file, and decrypt the Behinder C2 traffic. The key should be the first 16 hex characters of md5("HWmc..."), used as ASCII bytes. I also want to search for /favicondemo.ico in the port streams and check for requests involving "out" executions.
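The key derivation follows the usual Behinder convention (first 16 hex characters of the MD5 digest, used as ASCII bytes for AES-128-ECB), paired with PKCS#7 unpadding. A small sketch using the password recovered earlier in this session:

```python
from hashlib import md5

def behinder_key(password: bytes) -> bytes:
    # Behinder convention: first 16 hex chars of md5(password) as ASCII bytes
    return md5(password).hexdigest()[:16].encode()

def pkcs7_unpad(data: bytes) -> bytes:
    # strip trailing PKCS#7 padding only when it is well-formed
    if not data:
        return data
    p = data[-1]
    if 1 <= p <= 16 and data.endswith(bytes([p]) * p):
        return data[:-p]
    return data

key = behinder_key(b'HWmc2TLDoihdlr0N')
print(len(key))                                   # 16 (a lowercase-hex AES-128 key)
print(pkcs7_unpad(b'flagdata' + bytes([8]) * 8))  # b'flagdata'
```

The `endswith` check matters: a plaintext that merely happens to end in a small byte value is left untouched unless all padding bytes agree.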

bash -lc cat > /tmp/analyze_behinder.py <<'PY'
import dpkt, socket, base64, json, re, sys
from Crypto.Cipher import AES
from hashlib import md5
from collections import defaultdict

PCAP='/mnt/data/traffic_hunt.pcapng'
key=b'1f2c8075acd3d118'
server_ip='10.1.33.69'
server_port=8080
client_ip='10.1.243.155'

def unpad(x):
    if not x: return x
    p=x[-1]
    if 1 <= p <= 16 and x.endswith(bytes([p])*p): return x[:-p]
    return x

def aes_dec(data):
    return unpad(AES.new(key, AES.MODE_ECB).decrypt(data))

def try_b64_dec(s):
    try: return base64.b64decode(s)
    except Exception: return None

# reassemble per direction by 4-tuple
streams=defaultdict(lambda: {'c2s': {}, 's2c': {}})
with open(PCAP,'rb') as f:
    pcap=dpkt.pcapng.Reader(f)
    for ts, buf in pcap:
        try:
            eth=dpkt.ethernet.Ethernet(buf)
            ip=eth.data
            if not isinstance(ip, dpkt.ip.IP): continue
            tcp=ip.data
            if not isinstance(tcp, dpkt.tcp.TCP): continue
            sip=socket.inet_ntoa(ip.src); dip=socket.inet_ntoa(ip.dst)
            if {sip,dip}!={client_ip,server_ip}: continue
            if tcp.sport!=server_port and tcp.dport!=server_port: continue
            if sip==client_ip and dip==server_ip:
                sport=tcp.sport
                streams[sport]['c2s'][tcp.seq]=bytes(tcp.data)
            elif sip==server_ip and dip==client_ip:
                dport=tcp.dport
                streams[dport]['s2c'][tcp.seq]=bytes(tcp.data)
        except Exception: pass

def join_segments(segdict):
    # naive ordered concat, skipping overlaps
    out=b''; cur=None
    for seq,data in sorted(segdict.items()):
        if not data: continue
        if cur is None: out += data; cur = seq + len(data); continue
        if seq >= cur: out += data; cur = seq + len(data)
        else:
            overlap = cur - seq
            if overlap < len(data): out += data[overlap:]; cur += len(data)-overlap
    return out

def parse_http_messages(blob, is_request):
    msgs=[]; pos=0
    while pos < len(blob):
        try:
            if is_request:
                req=dpkt.http.Request(blob[pos:])
                body=req.body
                consumed=len(req)
                msgs.append(req)
            else:
                resp=dpkt.http.Response(blob[pos:])
                body=resp.body
                consumed=len(resp)
                msgs.append(resp)
            pos += consumed
        except (dpkt.dpkt.NeedData, dpkt.dpkt.UnpackError):
            # try to find next HTTP marker
            m = re.search(br'(GET|POST|HTTP/1.[01]) ', blob[pos+1:])
            if not m: break
            pos = pos + 1 + m.start()
    return msgs

for sport in sorted(streams):
    cblob=join_segments(streams[sport]['c2s'])
    sblob=join_segments(streams[sport]['s2c'])
    if b'/favicondemo.ico' not in cblob: continue
    print('\n=== STREAM', sport, '===')
    reqs=parse_http_messages(cblob, True)
    resps=parse_http_messages(sblob, False)
    print('reqs', len(reqs), 'resps', len(resps))
    # pair roughly by order for only /favicondemo.ico posts
    rix=0
    for i,req in enumerate(reqs,1):
        uri=getattr(req,'uri','')
        if uri != '/favicondemo.ico': continue
        body=req.body
        info=''
        text=''
        dec=None
        # decrypt request body if possible
        try:
            raw=base64.b64decode(body)
            dec=aes_dec(raw)
            # printable snippet
            text=dec.decode('latin1','ignore')
            m=re.search(r'(cd [^;\n]+;[^\n\r]{0,120}|mode=(?:list|update|check)\s+path=[^\s\x00]+(?:\s+hash=[0-9a-f]+)?)', text)
            if m: info=m.group(1)
            elif '/var/tmp/out' in text: info='contains /var/tmp/out'
            else: info=text[:120].replace('\n','\\n')
        except Exception as e:
            info='(decrypt-fail)'
        print(f'REQ {i}: {info}')
    # matching response by order among favicondemo resp? we'll just take next response index order.
    # responses
    for j,resp in enumerate(resps,1):
        body=resp.body
        if not body: continue
        try:
            dec=aes_dec(body)
            s=dec.decode('latin1','ignore')
            if s.startswith('{'):
                obj=json.loads(s)
                msg=base64.b64decode(obj.get('msg','')).decode('latin1','ignore') if obj.get('msg') else ''
                status=base64.b64decode(obj.get('status','')).decode('latin1','ignore') if obj.get('status') else ''
                short=msg[:200].replace('\n','\\n')
                print(f'RESP {j}: status={status} msg={short}')
            else:
                print(f'RESP {j}: {s[:200]!r}')
        except Exception: pass
PY
python /tmp/analyze_behinder.py | sed -n '1,260p'

Improving parsing with custom script

The current parsing isn't working well because of HTTP issues like chunked encoding or large body sizes. To fix this, I'll search for raw cblob markers like 'POST /favicondemo.ico'. I'll craft a script to manually split requests by headers and content-length, targeting stream 52023 for better parsing.

bash -lc python - <<'PY'
import dpkt, socket, re
from collections import defaultdict
PCAP='/mnt/data/traffic_hunt.pcapng'; client='10.1.243.155'; server='10.1.33.69'; port=8080
streams=defaultdict(lambda:{'c2s':{},'s2c':{}})
with open(PCAP,'rb') as f:
    for ts,buf in dpkt.pcapng.Reader(f):
        try:
            eth=dpkt.ethernet.Ethernet(buf); ip=eth.data; tcp=ip.data
            if not isinstance(ip, dpkt.ip.IP) or not isinstance(tcp, dpkt.tcp.TCP): continue
            sip=socket.inet_ntoa(ip.src); dip=socket.inet_ntoa(ip.dst)
            if {sip,dip}!={client,server}: continue
            if tcp.sport!=port and tcp.dport!=port: continue
            if sip==client and dip==server: streams[tcp.sport]['c2s'][tcp.seq]=bytes(tcp.data)
            elif sip==server and dip==client: streams[tcp.dport]['s2c'][tcp.seq]=bytes(tcp.data)
        except: pass

def join(segdict):
    out=b''; cur=None
    for seq,data in sorted(segdict.items()):
        if not data: continue
        if cur is None: out+=data; cur=seq+len(data); continue
        if seq>=cur: out+=data; cur=seq+len(data)
        else:
            ov=cur-seq
            if ov < len(data): out+=data[ov:]; cur+=len(data)-ov
    return out

c=join(streams[52023]['c2s']); s=join(streams[52023]['s2c'])
print('c len', len(c), 's len', len(s))
for m in re.finditer(br'POST /favicondemo.ico HTTP/1.1\r\n', c): print('post at', m.start())
for m in re.finditer(br'HTTP/1.1 200 OK\r\n', s): print('resp at', m.start())
print('first 200 bytes around first post:')
start=re.search(br'POST /favicondemo.ico HTTP/1.1\r\n', c).start()
print(c[start:start+300])
PY

Parsing requests by position

I need to extract and parse each request based on its content-length to view the decrypted data or command. I'll also need to parse the responses the same way. I'll write a script that processes the request headers and body at each offset in the list to ensure proper extraction.
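After AES decryption, a Behinder-style response body is JSON whose field values are themselves base64. Decoding that inner layer looks roughly like this (a minimal sketch; the decrypted JSON below is synthetic, not from the capture):

```python
import base64, json

def decode_behinder_fields(plaintext: bytes) -> dict:
    """Decode the base64-valued fields of an already-decrypted JSON response."""
    obj = json.loads(plaintext)
    out = {}
    for k, v in obj.items():
        try:
            out[k] = base64.b64decode(v).decode('latin1', 'ignore')
        except Exception:
            out[k] = v   # leave non-base64 values untouched
    return out

# synthetic decrypted response body for illustration
fake = json.dumps({
    'status': base64.b64encode(b'success').decode(),
    'msg': base64.b64encode(b'uid=0(root)').decode(),
}).encode()
print(decode_behinder_fields(fake))
```

Keeping AES decryption and JSON decoding as separate steps makes it easier to tell whether a failure is a wrong key or simply a non-JSON payload.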

bash -lc python - <<'PY'
import dpkt,socket,re,base64,json
from collections import defaultdict
from Crypto.Cipher import AES
from hashlib import md5
PCAP='/mnt/data/traffic_hunt.pcapng'; client='10.1.243.155'; server='10.1.33.69'; port=8080
key=b'1f2c8075acd3d118'
def unpad(x):
    p=x[-1]
    return x[:-p] if 1<=p<=16 and x.endswith(bytes([p])*p) else x

def dec(data): return unpad(AES.new(key,AES.MODE_ECB).decrypt(data))

streams=defaultdict(lambda:{'c2s':{},'s2c':{}})
with open(PCAP,'rb') as f:
    for ts,buf in dpkt.pcapng.Reader(f):
        try:
            eth=dpkt.ethernet.Ethernet(buf); ip=eth.data; tcp=ip.data
            if not isinstance(ip, dpkt.ip.IP) or not isinstance(tcp, dpkt.tcp.TCP): continue
            sip=socket.inet_ntoa(ip.src); dip=socket.inet_ntoa(ip.dst)
            if {sip,dip}!={client,server}: continue
            if tcp.sport!=port and tcp.dport!=port: continue
            if sip==client and dip==server: streams[tcp.sport]['c2s'][tcp.seq]=bytes(tcp.data)
            elif sip==server and dip==client: streams[tcp.dport]['s2c'][tcp.seq]=bytes(tcp.data)
        except: pass

def join(segdict):
    out=b''; cur=None
    for seq,data in sorted(segdict.items()):
        if not data: continue
        if cur is None: out+=data; cur=seq+len(data); continue
        if seq>=cur: out+=data; cur=seq+len(data)
        else:
            ov=cur-seq
            if ov < len(data): out+=data[ov:]; cur+=len(data)-ov
    return out

c=join(streams[52023]['c2s']); s=join(streams[52023]['s2c'])

# parse requests
req_pos=[m.start() for m in re.finditer(br'POST /favicondemo.ico HTTP/1.1\r\n', c)]
resp_pos=[m.start() for m in re.finditer(br'HTTP/1.1 200 OK\r\n', s)]
print('requests', len(req_pos), 'responses', len(resp_pos))

# show last 20 requests snippets and all after 60
for idx,pos in enumerate(req_pos,1):
    hdr_end=c.find(b'\r\n\r\n', pos)
    hdr=c[pos:hdr_end].decode('latin1','ignore')
    m=re.search(r'Content-Length: (\d+)', hdr, re.I)
    clen=int(m.group(1)) if m else 0
    body=c[hdr_end+4:hdr_end+4+clen]
    try:
        plain=dec(base64.b64decode(body))
        txt=plain.decode('latin1','ignore')
    except Exception as e:
        txt='[decrypt fail]'
    if idx>=60:
        print('\nREQ',idx,'len',clen)
        print(txt[:500].replace('\x00',' '))
        # if large, maybe search for interesting cmds
        for pat in ['cmd','/var/tmp/out','chmod','pwd','ls -la','./out','flag','dart{','cat ','check path','mode=list']:
            if pat in txt: print('contains',pat)

# parse last responses
for idx,pos in enumerate(resp_pos,1):
    hdr_end=s.find(b'\r\n\r\n', pos)
    hdr=s[pos:hdr_end].decode('latin1','ignore')
    m=re.search(r'Content-Length: (\d+)', hdr, re.I)
    clen=int(m.group(1)) if m else 0
    body=s[hdr_end+4:hdr_end+4+clen]
    if idx>=60:
        print('\nRESP', idx, 'len', clen)
        try:
            plain=dec(body)
            t=plain.decode('latin1','ignore')
            print(t[:500].replace('\x00',' '))
            if t.startswith('{'):
                obj=json.loads(t)
                for k,v in obj.items():
                    try:
                        dv=base64.b64decode(v).decode('latin1','ignore')
                        print('  ',k,':',dv[:300])
                    except: pass
        except Exception as e:
            print('dec fail',e)
PY

Improving response parsing

It looks like the response parsing failed possibly due to chunking, and I may have missed headers. I should search for HTTP/1.1 200 responses, maybe in lowercase, as the previous responses have been identified. I'll focus on decrypted request bodies and look for any useful flags or commands across other streams and ports. Time to script the parsing of all request bodies.

bash -lc python - <<'PY'
import dpkt,socket,re,base64
from collections import defaultdict
from Crypto.Cipher import AES
PCAP='/mnt/data/traffic_hunt.pcapng'; client='10.1.243.155'; server='10.1.33.69'; port=8080
key=b'1f2c8075acd3d118'
def unpad(x):
    p=x[-1]
    return x[:-p] if 1<=p<=16 and x.endswith(bytes([p])*p) else x

def dec(data): return unpad(AES.new(key,AES.MODE_ECB).decrypt(data))

streams=defaultdict(lambda:{'c2s':{},'s2c':{}})
with open(PCAP,'rb') as f:
    for ts,buf in dpkt.pcapng.Reader(f):
        try:
            eth=dpkt.ethernet.Ethernet(buf); ip=eth.data; tcp=ip.data
            if not isinstance(ip, dpkt.ip.IP) or not isinstance(tcp, dpkt.tcp.TCP): continue
            sip=socket.inet_ntoa(ip.src); dip=socket.inet_ntoa(ip.dst)
            if {sip,dip}!={client,server}: continue
            if tcp.sport!=port and tcp.dport!=port: continue
            if sip==client and dip==server: streams[tcp.sport]['c2s'][tcp.seq]=bytes(tcp.data)
            elif sip==server and dip==client: streams[tcp.dport]['s2c'][tcp.seq]=bytes(tcp.data)
        except: pass

def join(segdict):
    out=b''; cur=None
    for seq,data in sorted(segdict.items()):
        if not data: continue
        if cur is None: out+=data; cur=seq+len(data); continue
        if seq>=cur: out+=data; cur=seq+len(data)
        else:
            ov=cur-seq
            if ov < len(data): out+=data[ov:]; cur+=len(data)-ov
    return out

patterns=['./out',' /var/tmp/out','chmod +x out','/tmp/out','flag','dart{','cat ','strings ','base64 ','md5','sha','wget','curl','nc ','bash -c','./','/flag','find / -name','grep -R','python','perl','java -jar','file out','uname -a']
for sport in sorted(streams):
    c=join(streams[sport]['c2s'])
    poss=[m.start() for m in re.finditer(br'POST /favicondemo.ico HTTP/1.1\r\n', c)]
    hits=[]
    for idx,pos in enumerate(poss,1):
        hdr_end=c.find(b'\r\n\r\n', pos)
        if hdr_end<0: continue
        m=re.search(br'Content-Length: (\d+)', c[pos:hdr_end], re.I)
        if not m: continue
        clen=int(m.group(1))
        body=c[hdr_end+4:hdr_end+4+clen]
        try: txt=dec(base64.b64decode(body)).decode('latin1','ignore')
        except: continue
        for pat in patterns:
            if pat in txt:
                hits.append((idx,pat,txt[:400].replace('\n','\\n')))
                break
    if hits:
        print('\nSTREAM',sport,'hits',len(hits))
        for idx,pat,snip in hits[:20]: print(' req',idx,'pat',pat,'snip',snip)
PY

bash -lc python - <<'PY'
import dpkt,socket,re,base64,json
from collections import defaultdict
from Crypto.Cipher import AES
PCAP='/mnt/data/traffic_hunt.pcapng'; client='10.1.243.155'; server='10.1.33.69'; port=8080
key=b'1f2c8075acd3d118'
def unpad(x):
    if not x: return x
    p=x[-1]
    return x[:-p] if 1<=p<=16 and x.endswith(bytes([p])*p) else x

def dec(data): return unpad(AES.new(key,AES.MODE_ECB).decrypt(data))

streams=defaultdict(lambda:{'c2s':{},'s2c':{}})
with open(PCAP,'rb') as f:
    for ts,buf in dpkt.pcapng.Reader(f):
        try:
            eth=dpkt.ethernet.Ethernet(buf); ip=eth.data; tcp=ip.data
            if not isinstance(ip, dpkt.ip.IP) or not isinstance(tcp, dpkt.tcp.TCP): continue
            sip=socket.inet_ntoa(ip.src); dip=socket.inet_ntoa(ip.dst)
            if {sip,dip}!={client,server}: continue
            if tcp.sport!=port and tcp.dport!=port: continue
            if sip==client and dip==server: streams[tcp.sport]['c2s'][tcp.seq]=bytes(tcp.data)
            elif sip==server and dip==client: streams[tcp.dport]['s2c'][tcp.seq]=bytes(tcp.data)
        except: pass

def join(segdict):
    out=b''; cur=None
    for seq,data in sorted(segdict.items()):
        if not data: continue
        if cur is None: out+=data; cur=seq+len(data); continue
        if seq>=cur: out+=data; cur=seq+len(data)
        else:
            ov=cur-seq
            if ov < len(data): out+=data[ov:]; cur+=len(data)-ov
    return out

for sport in [52036,52037,52038,52039,52040,52041,52042,52043,52044,52045,52046,52047,52048,52049]:
    if sport not in streams: continue
    c=join(streams[sport]['c2s']); s=join(streams[sport]['s2c'])
    if not c and not s: continue
    print('\nSTREAM',sport,'clen',len(c),'slen',len(s))
    # print request cmds
    for m in re.finditer(br'POST /favicondemo.ico HTTP/1.1\r\n', c):
        pos=m.start(); hdr_end=c.find(b'\r\n\r\n', pos)
        if hdr_end<0: continue
        m2=re.search(br'Content-Length: (\d+)', c[pos:hdr_end], re.I)
        clen=int(m2.group(1)) if m2 else 0
        body=c[hdr_end+4:hdr_end+4+clen]
        try:
            txt=dec(base64.b64decode(body)).decode('latin1','ignore')
            # print command snippet or paths
            cmdm=re.search(r'cd [^\x00]{0,120}', txt)
            if cmdm: print('REQTXT', cmdm.group(0))
            elif './out' in txt or '/var/tmp/out' in txt or 'flag' in txt: print('REQTXT', txt[:300])
        except Exception as e: pass
    # response parse markers
    for hm in re.finditer(br'HTTP/1.1 200 OK\r\n', s):
        pos=hm.start(); hdr_end=s.find(b'\r\n\r\n', pos)
        if hdr_end<0: continue
        hdr=s[pos:hdr_end].decode('latin1','ignore')
        m2=re.search(r'Content-Length: (\d+)', hdr, re.I)
        clen=int(m2.group(1)) if m2 else 0
        body=s[hdr_end+4:hdr_end+4+clen]
        print('resp header',hdr.replace('\r\n',' | '), 'bodylen',len(body))
        try:
            plain=dec(body)
            print(' dec1', plain[:120])
            try:
                t=plain.decode('latin1','ignore')
                print(' txt', t[:300])
                if t.startswith('{'):
                    obj=json.loads(t)
                    for k,v in obj.items():
                        try: print('  ',k, base64.b64decode(v)[:300])
                        except: pass
            except Exception as e: pass
        except Exception as e: print(' dec fail',e)
PY

Investigating response parsing issue

It seems the response markers weren't found, possibly because the headers got merged or are missing. I'll take a closer look at the raw start of the response to identify any potential issues with the headers or formatting. This will help me figure out what went wrong with the parsing.

bash -lc python - <<'PY'
import dpkt,socket
from collections import defaultdict
PCAP='/mnt/data/traffic_hunt.pcapng'; client='10.1.243.155'; server='10.1.33.69'; port=8080
streams=defaultdict(lambda:{'c2s':{},'s2c':{}})
with open(PCAP,'rb') as f:
    for ts,buf in dpkt.pcapng.Reader(f):
        try:
            eth=dpkt.ethernet.Ethernet(buf); ip=eth.data; tcp=ip.data
            if not isinstance(ip, dpkt.ip.IP) or not isinstance(tcp, dpkt.tcp.TCP): continue
            sip=socket.inet_ntoa(ip.src); dip=socket.inet_ntoa(ip.dst)
            if {sip,dip}!={client,server}: continue
            if tcp.sport!=port and tcp.dport!=port: continue
            if sip==client and dip==server: streams[tcp.sport]['c2s'][tcp.seq]=bytes(tcp.data)
            elif sip==server and dip==client: streams[tcp.dport]['s2c'][tcp.seq]=bytes(tcp.data)
        except: pass

def join(segdict):
    out=b''; cur=None
    for seq,data in sorted(segdict.items()):
        if not data: continue
        if cur is None: out+=data; cur=seq+len(data); continue
        if seq>=cur: out+=data; cur=seq+len(data)
        else:
            ov=cur-seq
            if ov < len(data): out+=data[ov:]; cur+=len(data)-ov
    return out

for sport in [52036,52039,52041]:
    s=join(streams[sport]['s2c'])
    print('STREAM',sport,'raw repr',repr(s[:300]))
PY

Exploring potential issues with request streams

Hmm, the responses seem to return 302s, possibly because the request failed. Could these later streams be related to a "rememberMe" brute force from the browser? Maybe the requests aren't actually for "/favicondemo.ico" and we're seeing a false positive. I'll inspect the headers in stream 52036 to clarify.

bash -lc python - <<'PY'
import dpkt,socket,re
from collections import defaultdict
PCAP='/mnt/data/traffic_hunt.pcapng'; client='10.1.243.155'; server='10.1.33.69'; port=8080
streams=defaultdict(lambda:{'c2s':{},'s2c':{}})
with open(PCAP,'rb') as f:
    for ts,buf in dpkt.pcapng.Reader(f):
        try:
            eth=dpkt.ethernet.Ethernet(buf); ip=eth.data; tcp=ip.data
            if not isinstance(ip, dpkt.ip.IP) or not isinstance(tcp, dpkt.tcp.TCP): continue
            sip=socket.inet_ntoa(ip.src); dip=socket.inet_ntoa(ip.dst)
            if {sip,dip}!={client,server}: continue
            if tcp.sport!=port and tcp.dport!=port: continue
            if sip==client and dip==server: streams[tcp.sport]['c2s'][tcp.seq]=bytes(tcp.data)
            elif sip==server and dip==client: streams[tcp.dport]['s2c'][tcp.seq]=bytes(tcp.data)
        except: pass

def join(segdict):
    out=b''; cur=None
    for seq,data in sorted(segdict.items()):
        if not data: continue
        if cur is None: out+=data; cur=seq+len(data); continue
        if seq>=cur: out+=data; cur=seq+len(data)
        else:
            ov=cur-seq
            if ov < len(data): out+=data[ov:]; cur+=len(data)-ov
    return out

for sport in [52036,52037,52038,52039,52040,52041,52042,52043,52044,52045,52046,52047,52048,52049]:
    if sport not in streams: continue
    c=join(streams[sport]['c2s'])
    print('\nSTREAM',sport)
    print(repr(c[:400]))
PY

Refining stream analysis for correct pattern identification

It looks like I mistakenly scanned all C blob data for patterns inside GET request bodies, which led to decrypting random, irrelevant payloads. I need to focus on isolating only the "/favicondemo" streams, particularly POST requests. It seems like stream 52036 is not actually related, so let's search for all streams containing "/favicondemo.ico".

bash -lc python - <<'PY'
import dpkt,socket,re
from collections import defaultdict
PCAP='/mnt/data/traffic_hunt.pcapng'; client='10.1.243.155'; server='10.1.33.69'; port=8080
streams=defaultdict(lambda:{'c2s':{},'s2c':{}})
with open(PCAP,'rb') as f:
    for ts,buf in dpkt.pcapng.Reader(f):
        try:
            eth=dpkt.ethernet.Ethernet(buf); ip=eth.data; tcp=ip.data
            if not isinstance(ip, dpkt.ip.IP) or not isinstance(tcp, dpkt.tcp.TCP): continue
            sip=socket.inet_ntoa(ip.src); dip=socket.inet_ntoa(ip.dst)
            if {sip,dip}!={client,server}: continue
            if tcp.sport!=port and tcp.dport!=port: continue
            if sip==client and dip==server: streams[tcp.sport]['c2s'][tcp.seq]=bytes(tcp.data)
            elif sip==server and dip==client: streams[tcp.dport]['s2c'][tcp.seq]=bytes(tcp.data)
        except: pass

def join(segdict):
    out=b''; cur=None
    for seq,data in sorted(segdict.items()):
        if not data: continue
        if cur is None: out+=data; cur=seq+len(data); continue
        if seq>=cur: out+=data; cur=seq+len(data)
        else:
            ov=cur-seq
            if ov < len(data): out+=data[ov:]; cur+=len(data)-ov
    return out

for sport in sorted(streams):
    c=join(streams[sport]['c2s'])
    if b'/favicondemo.ico' in c: print(sport, c.count(b'/favicondemo.ico'))
PY

Focusing on relevant streams for command parsing

I need to focus on parsing the commands in the "/favicondemo" streams after 52023, especially from streams 52027-52036. I'll isolate real POST requests by splitting at POST markers and then extract last requests for command or file operations, looking for strings like "cat /proc" or "read /tmp/out". This will help generate a summary of each stream's request types.

bash -lc python - <<'PY'
import dpkt,socket,re,base64
from collections import defaultdict
from Crypto.Cipher import AES
PCAP='/mnt/data/traffic_hunt.pcapng'; client='10.1.243.155'; server='10.1.33.69'; port=8080
key=b'1f2c8075acd3d118'
def unpad(x):
    if not x: return x
    p=x[-1]
    return x[:-p] if 1<=p<=16 and x.endswith(bytes([p])*p) else x

def dec(data): return unpad(AES.new(key,AES.MODE_ECB).decrypt(data))

streams=defaultdict(lambda:{'c2s':{},'s2c':{}})
with open(PCAP,'rb') as f:
    for ts,buf in dpkt.pcapng.Reader(f):
        try:
            eth=dpkt.ethernet.Ethernet(buf); ip=eth.data; tcp=ip.data
            if not isinstance(ip, dpkt.ip.IP) or not isinstance(tcp, dpkt.tcp.TCP): continue
            sip=socket.inet_ntoa(ip.src); dip=socket.inet_ntoa(ip.dst)
            if {sip,dip}!={client,server}: continue
            if tcp.sport!=port and tcp.dport!=port: continue
            if sip==client and dip==server: streams[tcp.sport]['c2s'][tcp.seq]=bytes(tcp.data)
            elif sip==server and dip==client: streams[tcp.dport]['s2c'][tcp.seq]=bytes(tcp.data)
        except: pass

def join(segdict):
    out=b''; cur=None
    for seq,data in sorted(segdict.items()):
        if not data: continue
        if cur is None: out+=data; cur=seq+len(data); continue
        if seq>=cur: out+=data; cur=seq+len(data)
        else:
            ov=cur-seq
            if ov < len(data): out+=data[ov:]; cur+=len(data)-ov
    return out

def parse_posts(c):
    posts=[]
    poss=[m.start() for m in re.finditer(br'POST /favicondemo.ico HTTP/1.1\r\n', c)]
    for pos in poss:
        hdr_end=c.find(b'\r\n\r\n', pos)
        if hdr_end<0: continue
        hdr=c[pos:hdr_end]
        m=re.search(br'Content-Length: (\d+)', hdr, re.I)
        if not m: continue
        clen=int(m.group(1))
        body=c[hdr_end+4:hdr_end+4+clen]
        posts.append(body)
    return posts

for sport in [52023,52027,52028,52029,52030,52031,52032,52033,52034,52035,52036,57066]:
    c=join(streams[sport]['c2s'])
    posts=parse_posts(c)
    print('\nSTREAM',sport,'posts',len(posts))
    interesting=[]
    for i,body in enumerate(posts,1):
        try: txt=dec(base64.b64decode(body)).decode('latin1','ignore')
        except Exception: continue
        found=None
        for pat in ['chmod +x out','./out','/var/tmp/ ;','/var/tmp/out',' /tmp/','mode=list','mode=check','mode=update','whoami','ps -ef','pwd','ls -la','BasicInfo','Echo']:
            if pat in txt: found=pat; break
        if found: interesting.append((i,found,txt[:220].replace('\n','\\n')))
    for item in interesting[:12]: print(item)
    if len(interesting)>12: print('... total',len(interesting))
PY

Investigating response for stream 52036

It looks like the response parsing for stream 52036 failed because the server returned only 302s, which likely means the request itself failed. The stream has many GET requests followed by a POST. I need to locate the last HTTP header markers and search around the end of the stream for a 200 response after the 302s.

bash -lc python - <<'PY'
import dpkt,socket,re
from collections import defaultdict
PCAP='/mnt/data/traffic_hunt.pcapng'; client='10.1.243.155'; server='10.1.33.69'; port=8080
streams=defaultdict(lambda:{'c2s':{},'s2c':{}})
with open(PCAP,'rb') as f:
    for ts,buf in dpkt.pcapng.Reader(f):
        try:
            eth=dpkt.ethernet.Ethernet(buf); ip=eth.data; tcp=ip.data
            if not isinstance(ip, dpkt.ip.IP) or not isinstance(tcp, dpkt.tcp.TCP): continue
            sip=socket.inet_ntoa(ip.src); dip=socket.inet_ntoa(ip.dst)
            if {sip,dip}!={client,server}: continue
            if tcp.sport!=port and tcp.dport!=port: continue
            if sip==client and dip==server: streams[tcp.sport]['c2s'][tcp.seq]=bytes(tcp.data)
            elif sip==server and dip==client: streams[tcp.dport]['s2c'][tcp.seq]=bytes(tcp.data)
        except: pass

def join(segdict):
    out=b''; cur=None
    for seq,data in sorted(segdict.items()):
        if not data: continue
        if cur is None: out+=data; cur=seq+len(data); continue
        if seq>=cur: out+=data; cur=seq+len(data)
        else:
            ov=cur-seq
            if ov < len(data): out+=data[ov:]; cur+=len(data)-ov
    return out

s=join(streams[52036]['s2c'])
for m in re.finditer(br'HTTP/1.1 ', s):
    pos=m.start()
    print('at',pos,repr(s[pos:pos+120]))
print('tail',repr(s[-300:]))
PY

Evaluating response for stream 52036

It looks like stream 52036 didn’t return a 200 response, and I’m wondering if the command request got a 302 due to an invalid session. Maybe I need to check a later stream with the same session or see if the request was missing a JSESSIONID. I’ll inspect the headers to figure this out.

bash -lc python - <<'PY'
import dpkt,socket,re
from collections import defaultdict
PCAP='/mnt/data/traffic_hunt.pcapng'; client='10.1.243.155'; server='10.1.33.69'; port=8080
streams=defaultdict(lambda:{'c2s':{},'s2c':{}})
with open(PCAP,'rb') as f:
    for ts,buf in dpkt.pcapng.Reader(f):
        try:
            eth=dpkt.ethernet.Ethernet(buf); ip=eth.data; tcp=ip.data
            if not isinstance(ip, dpkt.ip.IP) or not isinstance(tcp, dpkt.tcp.TCP): continue
            sip=socket.inet_ntoa(ip.src); dip=socket.inet_ntoa(ip.dst)
            if {sip,dip}!={client,server}: continue
            if tcp.sport!=port and tcp.dport!=port: continue
            if sip==client and dip==server: streams[tcp.sport]['c2s'][tcp.seq]=bytes(tcp.data)
            elif sip==server and dip==client: streams[tcp.dport]['s2c'][tcp.seq]=bytes(tcp.data)
        except: pass

def join(segdict):
    out=b''; cur=None
    for seq,data in sorted(segdict.items()):
        if not data: continue
        if cur is None: out+=data; cur=seq+len(data); continue
        if seq>=cur: out+=data; cur=seq+len(data)
        else:
            ov=cur-seq
            if ov < len(data): out+=data[ov:]; cur+=len(data)-ov
    return out

c=join(streams[52036]['c2s'])
for m in re.finditer(br'POST /favicondemo.ico HTTP/1.1\r\n', c):
    pos=m.start(); hdr_end=c.find(b'\r\n\r\n', pos)
    print(c[pos:hdr_end].decode('latin1','ignore'))
PY

Re-evaluating stream 52036 and exploring possibilities

It seems like there’s a possibility stream 52036 isn’t useful. The stream had a mix of GETs and a POST, but only 302 responses were found. The command might have been executed outside of HTTP or through another channel, like a reverse shell. I’ll grep for the AES key string across the entire capture.

bash -lc grep -aob 'IhbJfHI98nuSvs5JweD5qsNvSQ/HHcE/SNLyEBU9Phs=' /mnt/data/traffic_hunt.pcapng | sed -n '1,20p'

Reconstructing binary and analyzing encrypted content

I’ll attempt to reconstruct the uploaded binary and analyze it. Since the command uses an AES key, it likely decrypts something, perhaps a local file. I'll parse the request's constant pool for path/content values, and examine if any strings or raw data are embedded, potentially base64-encoded, within the FileOperation update.

Parsing Java class file to extract chunk data

I can parse the Java class file's constant pool for UTF8 entries, looking for strings like update, /var/tmp/out, and block data. The chunk data might be base64 encoded if transmitted in binary. I’ll build a minimal parser to extract all strings, then try reconstructing and decoding the chunks.
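The constant-pool walk can be sanity-checked against a hand-built minimal pool before running it on the captured class; a sketch (the class bytes below are synthetic):

```python
import struct

def class_utf8s(data: bytes) -> list[str]:
    """Walk a Java class file's constant pool and collect CONSTANT_Utf8 strings."""
    assert data[:4] == b'\xca\xfe\xba\xbe', 'not a class file'
    cp_count = struct.unpack('>H', data[8:10])[0]
    i, idx, out = 10, 1, []
    while idx < cp_count:
        tag = data[i]; i += 1
        if tag == 1:                            # CONSTANT_Utf8: u2 length + bytes
            ln = struct.unpack('>H', data[i:i+2])[0]; i += 2
            out.append(data[i:i+ln].decode('utf-8', 'ignore')); i += ln
        elif tag in (3, 4):   i += 4            # Integer / Float
        elif tag in (5, 6):   i += 8; idx += 1  # Long / Double occupy two slots
        elif tag in (7, 8, 16, 19, 20): i += 2  # Class / String / MethodType / Module / Package
        elif tag in (9, 10, 11, 12, 18): i += 4 # refs / NameAndType / InvokeDynamic
        elif tag == 15:       i += 3            # MethodHandle
        else: raise ValueError(f'unknown tag {tag}')
        idx += 1
    return out

# minimal synthetic pool: magic, version, count=3, one Utf8 "update", one Class ref
blob = (b'\xca\xfe\xba\xbe' + b'\x00\x00\x00\x34' + struct.pack('>H', 3)
        + b'\x01' + struct.pack('>H', 6) + b'update'
        + b'\x07' + struct.pack('>H', 1))
print(class_utf8s(blob))  # ['update']
```

The two-slot rule for Long/Double is the classic pitfall here; missing it desynchronizes the walk on any class that stores 64-bit constants.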

bash -lc cat > /tmp/class_strings.py <<'PY'
import struct, sys

def parse_utf8s(data):
    if data[:4] != b'\xca\xfe\xba\xbe': raise ValueError('not class')
    cp_count = struct.unpack('>H', data[8:10])[0]
    i=10
    idx=1
    out=[]
    while idx < cp_count:
        tag = data[i]; i+=1
        if tag == 1:  # Utf8
            ln = struct.unpack('>H', data[i:i+2])[0]; i+=2
            bs=data[i:i+ln]; i+=ln
            try: s=bs.decode('utf-8')
            except: s=bs.decode('latin1','ignore')
            out.append((idx,s))
        elif tag in (3,4): i+=4
        elif tag in (5,6): i+=8; idx+=1
        elif tag in (7,8,16,19,20): i+=2
        elif tag in (9,10,11,12,18): i+=4
        elif tag == 15: i+=3
        else: raise ValueError(f'unknown tag {tag} at idx {idx} off {i-1}')
        idx+=1
    return out

if __name__=='__main__':
    d=open(sys.argv[1],'rb').read()
    for idx,s in parse_utf8s(d): print(idx,repr(s)[:500])
PY

Reconstructing and parsing update request

I’ll reconstruct one update request body to file and parse the strings. Then, I can write a script to extract the decoded class bytes and start working with them. This should allow me to identify and process the relevant data in a more structured way for analysis.

bash -lc python - <<'PY'
import dpkt,socket,re,base64
from collections import defaultdict
from Crypto.Cipher import AES
PCAP='/mnt/data/traffic_hunt.pcapng'; client='10.1.243.155'; server='10.1.33.69'; port=8080
key=b'1f2c8075acd3d118'
def unpad(x):
    if not x: return x
    p=x[-1]
    return x[:-p] if 1<=p<=16 and x.endswith(bytes([p])*p) else x

def dec(data): return unpad(AES.new(key,AES.MODE_ECB).decrypt(data))

streams=defaultdict(lambda:{'c2s':{},'s2c':{}})
with open(PCAP,'rb') as f:
    for ts,buf in dpkt.pcapng.Reader(f):
        try:
            eth=dpkt.ethernet.Ethernet(buf); ip=eth.data; tcp=ip.data
            if not isinstance(ip, dpkt.ip.IP) or not isinstance(tcp, dpkt.tcp.TCP): continue
            sip=socket.inet_ntoa(ip.src); dip=socket.inet_ntoa(ip.dst)
            if {sip,dip}!={client,server}: continue
            if tcp.sport!=port and tcp.dport!=port: continue
            if sip==client and dip==server: streams[tcp.sport]['c2s'][tcp.seq]=bytes(tcp.data)
            elif sip==server and dip==client: streams[tcp.dport]['s2c'][tcp.seq]=bytes(tcp.data)
        except: pass

def join(segdict):
    out=b''; cur=None
    for seq,data in sorted(segdict.items()):
        if not data: continue
        if cur is None: out+=data; cur=seq+len(data); continue
        if seq>=cur: out+=data; cur=seq+len(data)
        else:
            ov=cur-seq
            if ov < len(data): out+=data[ov:]; cur+=len(data)-ov
    return out

c=join(streams[52023]['c2s'])
poss=[m.start() for m in re.finditer(br'POST /favicondemo.ico HTTP/1.1\r\n', c)]
for req_no in [10,62,63,64,65,66,67]:
    pos=poss[req_no-1]
    hdr_end=c.find(b'\r\n\r\n', pos)
    m=re.search(br'Content-Length: (\d+)', c[pos:hdr_end], re.I)
    clen=int(m.group(1))
    body=c[hdr_end+4:hdr_end+4+clen]
    plain=dec(base64.b64decode(body))
    open(f'/tmp/req{req_no}.class','wb').write(plain)
    print('wrote',req_no,len(plain))
PY
python /tmp/class_strings.py /tmp/req10.class | sed -n '1,120p'

Reconstructing update chunks and parsing data

The content is a long base64-like string, so I need to reconstruct all update chunks and potentially decode them. I'll focus on extracting fields such as mode, path, content, and blocksize, and check if blockIndex is omitted. I’ll parse the constant pool for long strings like chunk names and inspect the sequence in requests to determine chunk order. Some values might help identify the order as I parse these requests.

Inspecting blockIndex and chunk order

It looks like a string '2' might represent the blockIndex, along with other fields like the mode ('update'), path ('/var/tmp/out'), and blockSize ('30720'). The chunk numbers could be increasing across requests 10 to 61, maybe from 2 to 53. I'll need to inspect request 11 for numeric strings and also create a script to list string lengths, focusing on the numbers.

bash -lc python - <<'PY'
import re, struct
from pathlib import Path

def parse_utf8s(data):
    cp_count=struct.unpack('>H',data[8:10])[0]; i=10; idx=1; out=[]
    while idx<cp_count:
        tag=data[i]; i+=1
        if tag==1:
            ln=struct.unpack('>H',data[i:i+2])[0]; i+=2
            bs=data[i:i+ln]; i+=ln
            out.append(bs.decode('utf-8','ignore'))
        elif tag in (3,4): i+=4
        elif tag in (5,6): i+=8; idx+=1
        elif tag in (7,8,16,19,20): i+=2
        elif tag in (9,10,11,12,18): i+=4
        elif tag==15: i+=3
        else: raise ValueError(tag)
        idx+=1
    return out

for req in [10,11,12,60,61,62,63]:
    path=Path(f'/tmp/req{req}.class')
    data=path.read_bytes() if path.exists() else None
    if not data: continue
    strs=parse_utf8s(data)
    nums=[s for s in strs if re.fullmatch(r'[0-9]+',s)]
    longs=[s for s in strs if len(s)>100]
    print('REQ',req,'nums',nums[:10],'longcount',len(longs),'long0len',len(longs[0]) if longs else None)
PY

Reconstructing binary chunks from update requests

I'll save the update request files, even though it might not be strictly necessary. The main task is reconstructing the chunks by parsing the UTF-8 strings from the plaintext bodies. The long string (40960) is likely base64 encoded and corresponds to a 30720-byte binary chunk. I'll need to check requests 10-61, though 52 chunks may suffice for reconstructing the 10.79MB file. I'll carefully check and save each request to determine the chunk order and verify the presence of blockIndex starting from 2 in request 10.
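The 40,960-character string maps cleanly onto a 30,720-byte block: a quick standalone sanity check (not part of the extraction scripts) confirms the base64 length arithmetic, since 30,720 is a multiple of 3 and needs no padding.

```python
import base64

# base64 expands 3 input bytes to 4 output chars; 30720 % 3 == 0, so no '=' padding
block = bytes(30720)
encoded = base64.b64encode(block)
print(len(encoded))  # 40960
```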

bash -lc python - <<'PY'
import dpkt, socket, re, base64, struct
from collections import defaultdict
from Crypto.Cipher import AES
from pathlib import Path

PCAP='/mnt/data/traffic_hunt.pcapng'; client='10.1.243.155'; server='10.1.33.69'; port=8080
key=b'1f2c8075acd3d118'

def unpad(x):
    if not x: return x
    p=x[-1]
    return x[:-p] if 1<=p<=16 and x.endswith(bytes([p])*p) else x

def dec(data):
    return unpad(AES.new(key, AES.MODE_ECB).decrypt(data))

def join(segdict):
    out=b''; cur=None
    for seq,data in sorted(segdict.items()):
        if not data: continue
        if cur is None:
            out+=data; cur=seq+len(data); continue
        if seq>=cur:
            out+=data; cur=seq+len(data)
        else:
            ov=cur-seq
            if ov < len(data):
                out+=data[ov:]; cur+=len(data)-ov
    return out

streams=defaultdict(lambda:{'c2s':{},'s2c':{}})
with open(PCAP,'rb') as f:
    for ts,buf in dpkt.pcapng.Reader(f):
        try:
            eth=dpkt.ethernet.Ethernet(buf); ip=eth.data; tcp=ip.data
            if not isinstance(ip, dpkt.ip.IP) or not isinstance(tcp, dpkt.tcp.TCP): continue
            sip=socket.inet_ntoa(ip.src); dip=socket.inet_ntoa(ip.dst)
            if {sip,dip}!={client,server}: continue
            if tcp.sport!=port and tcp.dport!=port: continue
            if sip==client and dip==server:
                streams[tcp.sport]['c2s'][tcp.seq]=bytes(tcp.data)
            elif sip==server and dip==client:
                streams[tcp.dport]['s2c'][tcp.seq]=bytes(tcp.data)
        except: pass

c=join(streams[52023]['c2s'])
poss=[m.start() for m in re.finditer(br'POST /favicondemo.ico HTTP/1.1\r\n', c)]
for req_no in [11,12,13,14,15,16]:
    pos=poss[req_no-1]
    hdr_end=c.find(b'\r\n\r\n', pos)
    m=re.search(br'Content-Length: (\d+)', c[pos:hdr_end], re.I)
    body=c[hdr_end+4:hdr_end+4+int(m.group(1))]
    plain=dec(base64.b64decode(body))
    Path(f'/tmp/req{req_no}.class').write_bytes(plain)
    # parse utf8s
    cp=struct.unpack('>H',plain[8:10])[0]; i=10; idx=1; arr=[]
    while idx<cp:
        tag=plain[i]; i+=1
        if tag==1:
            ln=struct.unpack('>H',plain[i:i+2])[0]; i+=2
            arr.append(plain[i:i+ln].decode('utf-8','ignore')); i+=ln
        elif tag in (3,4): i+=4
        elif tag in (5,6): i+=8; idx+=1
        elif tag in (7,8,16,19,20): i+=2
        elif tag in (9,10,11,12,18): i+=4
        elif tag==15: i+=3
        idx+=1
    nums=[s for s in arr if re.fullmatch(r'[0-9]+',s)]
    longs=[s for s in arr if len(s)>100]
    print('req',req_no,'nums',nums[:5],'firstlonglen',len(longs[0]) if longs else None)
PY

Investigating blockIndex sequencing and chunk order

It looks like the blockIndex jumps across updates, which might suggest that the order isn't necessarily sorted by request number. Perhaps the class generator applies some randomization, which would cause numeric string order to not directly reflect the blockIndex. I'll check the field assignments from the bytecode. It's also possible that the numeric strings represent chunk indexes, but sorting issues and parallel uploads might cause out-of-order chunk arrivals. I'll gather all the blockIndex values across the update requests to figure out the correct indices and count duplicates.

bash -lc python - <<'PY'
import dpkt, socket, re, base64, struct, collections
from collections import defaultdict
from Crypto.Cipher import AES

PCAP='/mnt/data/traffic_hunt.pcapng'; client='10.1.243.155'; server='10.1.33.69'; port=8080
key=b'1f2c8075acd3d118'

def unpad(x):
    if not x: return x
    p=x[-1]
    return x[:-p] if 1<=p<=16 and x.endswith(bytes([p])*p) else x

def dec(data):
    return unpad(AES.new(key, AES.MODE_ECB).decrypt(data))

def join(segdict):
    out=b''; cur=None
    for seq,data in sorted(segdict.items()):
        if not data: continue
        if cur is None:
            out+=data; cur=seq+len(data); continue
        if seq>=cur:
            out+=data; cur=seq+len(data)
        else:
            ov=cur-seq
            if ov < len(data):
                out+=data[ov:]; cur+=len(data)-ov
    return out

def parse_utf8s(plain):
    cp=struct.unpack('>H',plain[8:10])[0]; i=10; idx=1; arr=[]
    while idx<cp:
        tag=plain[i]; i+=1
        if tag==1:
            ln=struct.unpack('>H',plain[i:i+2])[0]; i+=2
            arr.append(plain[i:i+ln].decode('utf-8','ignore')); i+=ln
        elif tag in (3,4): i+=4
        elif tag in (5,6): i+=8; idx+=1
        elif tag in (7,8,16,19,20): i+=2
        elif tag in (9,10,11,12,18): i+=4
        elif tag==15: i+=3
        else: break
        idx+=1
    return arr

streams=defaultdict(lambda:{'c2s':{},'s2c':{}})
with open(PCAP,'rb') as f:
    for ts,buf in dpkt.pcapng.Reader(f):
        try:
            eth=dpkt.ethernet.Ethernet(buf); ip=eth.data; tcp=ip.data
            if not isinstance(ip, dpkt.ip.IP) or not isinstance(tcp, dpkt.tcp.TCP): continue
            sip=socket.inet_ntoa(ip.src); dip=socket.inet_ntoa(ip.dst)
            if {sip,dip}!={client,server}: continue
            if tcp.sport!=port and tcp.dport!=port: continue
            if sip==client and dip==server:
                streams[tcp.sport]['c2s'][tcp.seq]=bytes(tcp.data)
            elif sip==server and dip==client:
                streams[tcp.dport]['s2c'][tcp.seq]=bytes(tcp.data)
        except: pass

all_idx=[]
for sport in sorted(streams):
    c=join(streams[sport]['c2s'])
    poss=[m.start() for m in re.finditer(br'POST /favicondemo.ico HTTP/1.1\r\n', c)]
    for ri,pos in enumerate(poss,1):
        hdr_end=c.find(b'\r\n\r\n', pos)
        if hdr_end<0: continue
        m=re.search(br'Content-Length: (\d+)', c[pos:hdr_end], re.I)
        if not m: continue
        body=c[hdr_end+4:hdr_end+4+int(m.group(1))]
        try:
            plain=dec(base64.b64decode(body))
            strs=parse_utf8s(plain)
        except Exception:
            continue
        if 'update' in strs and '/var/tmp/out' in strs:
            nums=[int(s) for s in strs if re.fullmatch(r'[0-9]+',s)]
            longs=[s for s in strs if len(s)>100]
            all_idx.append((sport,ri,nums,longs[0] if longs else None))
print('total update reqs',len(all_idx))

# print sample of first indices
for x in all_idx[:30]:
    sport,ri,nums,long=x
    print(sport,ri,nums[:3],'longlen',len(long) if long else None)

# collect candidate indices excluding 30720
vals=[]
for sport,ri,nums,long in all_idx:
    cand=[n for n in nums if n!=30720]
    vals.extend(cand)
print('unique candidate nums',len(set(vals)),'min',min(vals),'max',max(vals))
print('first 120 sorted unique',sorted(set(vals))[:120])
print('missing in range?')
uniq=sorted(set(vals))
miss=[n for n in range(min(uniq), max(uniq)+1) if n not in uniq]
print('miss count',len(miss),'first miss',miss[:50])
PY

Reconstructing file from chunks

Great! We can reconstruct the file from 352 chunks, numbers 0 to 351. This involves decoding a long string that may be base64. The 351 complete chunks are 30,720 bytes each, with the last chunk potentially being smaller. Given the total size of 10,790,868 bytes, we'll split it into 352 chunks and analyze the results.
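The chunk arithmetic can be checked in isolation, using the total size and block size quoted above: 351 full blocks plus one partial tail gives 352 chunks.

```python
import math

total = 10_790_868   # reconstructed file size in bytes
block = 30_720       # blockSize field seen in the update requests
full, tail = divmod(total, block)
print(full, tail)                 # 351 full blocks, 8148-byte final block
print(math.ceil(total / block))   # 352 chunks total, indices 0..351
```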

bash -lc python - <<'PY'
import dpkt, socket, re, base64, struct, hashlib, os
from collections import defaultdict
from Crypto.Cipher import AES

PCAP='/mnt/data/traffic_hunt.pcapng'; client='10.1.243.155'; server='10.1.33.69'; port=8080
key=b'1f2c8075acd3d118'

def unpad(x):
    if not x: return x
    p=x[-1]
    return x[:-p] if 1<=p<=16 and x.endswith(bytes([p])*p) else x

def dec(data):
    return unpad(AES.new(key, AES.MODE_ECB).decrypt(data))

def join(segdict):
    out=b''; cur=None
    for seq,data in sorted(segdict.items()):
        if not data: continue
        if cur is None:
            out+=data; cur=seq+len(data); continue
        if seq>=cur:
            out+=data; cur=seq+len(data)
        else:
            ov=cur-seq
            if ov < len(data):
                out+=data[ov:]; cur+=len(data)-ov
    return out

def parse_utf8s(plain):
    cp=struct.unpack('>H',plain[8:10])[0]; i=10; idx=1; arr=[]
    while idx<cp:
        tag=plain[i]; i+=1
        if tag==1:
            ln=struct.unpack('>H',plain[i:i+2])[0]; i+=2
            arr.append(plain[i:i+ln].decode('utf-8','ignore')); i+=ln
        elif tag in (3,4): i+=4
        elif tag in (5,6): i+=8; idx+=1
        elif tag in (7,8,16,19,20): i+=2
        elif tag in (9,10,11,12,18): i+=4
        elif tag==15: i+=3
        else: raise ValueError(f'tag {tag}')
        idx+=1
    return arr

streams=defaultdict(lambda:{'c2s':{},'s2c':{}})
with open(PCAP,'rb') as f:
    for ts,buf in dpkt.pcapng.Reader(f):
        try:
            eth=dpkt.ethernet.Ethernet(buf); ip=eth.data; tcp=ip.data
            if not isinstance(ip, dpkt.ip.IP) or not isinstance(tcp, dpkt.tcp.TCP): continue
            sip=socket.inet_ntoa(ip.src); dip=socket.inet_ntoa(ip.dst)
            if {sip,dip}!={client,server}: continue
            if tcp.sport!=port and tcp.dport!=port: continue
            if sip==client and dip==server:
                streams[tcp.sport]['c2s'][tcp.seq]=bytes(tcp.data)
            elif sip==server and dip==client:
                streams[tcp.dport]['s2c'][tcp.seq]=bytes(tcp.data)
        except: pass

chunks={}
for sport in sorted(streams):
    c=join(streams[sport]['c2s'])
    poss=[m.start() for m in re.finditer(br'POST /favicondemo.ico HTTP/1.1\r\n', c)]
    for pos in poss:
        hdr_end=c.find(b'\r\n\r\n', pos)
        if hdr_end<0: continue
        m=re.search(br'Content-Length: (\d+)', c[pos:hdr_end], re.I)
        if not m: continue
        body=c[hdr_end+4:hdr_end+4+int(m.group(1))]
        try:
            plain=dec(base64.b64decode(body))
            strs=parse_utf8s(plain)
        except Exception:
            continue
        if 'update' in strs and '/var/tmp/out' in strs:
            nums=[int(s) for s in strs if re.fullmatch(r'[0-9]+',s)]
            idx=[n for n in nums if n!=30720]
            if len(idx)!=1: continue
            idx=idx[0]
            longs=[s for s in strs if len(s)>100]
            if len(longs)!=1: continue
            chunks[idx]=base64.b64decode(longs[0])

print('chunks',len(chunks),'min',min(chunks),'max',max(chunks))
missing=[i for i in range(max(chunks)+1) if i not in chunks]
print('missing',missing[:10],'count',len(missing))
out=b''.join(chunks[i] for i in range(max(chunks)+1))
open('/mnt/data/out_reconstructed','wb').write(out)
print('size',len(out),'md5',hashlib.md5(out).hexdigest())
print('sha256',hashlib.sha256(out).hexdigest())
PY
file /mnt/data/out_reconstructed
ls -lh /mnt/data/out_reconstructed
md5sum /mnt/data/out_reconstructed
strings -a -n 8 /mnt/data/out_reconstructed | grep -ao 'dart{[^}]*}' | head

Analyzing ELF for key/flag

Great! I need to analyze the ELF file. I'll start by searching for strings related to "aes-key" or "flag" within it. This should help find either encryption keys or potential flags embedded in the file. Let’s see what comes up in the analysis.

bash -lc strings -a -n 6 /mnt/data/out_reconstructed | egrep -i 'dart{|flag{|aes|key|secret|usage|--aes|tmp|flag' | sed -n '1,200p'

Running and inspecting ELF binary

I might need to run the static ELF binary. Since the user asked for a flag from the pcap, running the reconstructed binary seems like a reasonable step. I could use the --help argument to inspect available options, or run it without arguments in a safe container if needed.

bash -lc chmod +x /mnt/data/out_reconstructed && /mnt/data/out_reconstructed --help 2>&1 | head -n 50

Running with AES key from request

I think I can run the script using Python's argparse module, which will allow me to pass the AES key from the request. This approach seems like a good way to handle the request’s parameters and execute the necessary operation with the provided key.

bash -lc /mnt/data/out_reconstructed --aes-key 'IhbJfHI98nuSvs5JweD5qsNvSQ/HHcE/SNLyEBU9Phs=' 2>&1 | head -c 1000 | xxd -g 1 | sed -n '1,80p'

Running the binary and checking output

Since xxd is unavailable, I’ll just run the binary directly and inspect the standard output. This will help me understand how the binary behaves and determine if it needs further analysis or adjustments from the results. It feels like a straightforward approach to troubleshoot.

bash -lc /mnt/data/out_reconstructed --aes-key 'IhbJfHI98nuSvs5JweD5qsNvSQ/HHcE/SNLyEBU9Phs=' > /tmp/out_run.bin 2>/tmp/out_run.err; echo RC:$?; ls -lh /tmp/out_run.bin /tmp/out_run.err; file /tmp/out_run.bin /tmp/out_run.err; head -c 200 /tmp/out_run.bin | od -An -t x1 | sed -n '1,20p'; echo 'stderr:'; sed -n '1,50p' /tmp/out_run.err

Inspecting binary for details

I need to analyze the binary further since it might be a PyInstaller package with embedded Python. I'll use string inspection to look for network information like domains, IP addresses, or paths. This should give me clues about its behavior and how it interacts with external systems.

bash -lc strings -a -n 5 /mnt/data/out_reconstructed | egrep -i 'connect|socket|http|https|tcp|udp|[0-9]+.[0-9]+.[0-9]+.[0-9]+|implant.py|aes|key|port|server|host|tmp|flag|dart|/tmp|/var/tmp|recv|send' | sed -n '1,240p'

Checking for UPX unpacking tool

I’ll check if I can use a UPX unpacking tool. It might help me extract the original binary if it’s been packed with UPX. I’ll need to verify if the tool is available and see if it's a viable solution for this execution. Let's confirm.

bash -lc which upx || true; which binwalk || true; which pyinstxtractor.py || true; which pycdc || true; which objdump || true; which readelf || true; which strings || true

Looking into unpacking alternatives for PyInstaller

I see that UPX isn't available. I could try installing it via apt, though that might not work. Maybe I could use a Python library instead? It might be worth manually unpacking the file, or using pyinstxtractor if it's not sectioned. I'll check if PyInstaller is installed by running python -c "import PyInstaller".

bash -lc python - <<'PY'
import importlib.util
for m in ['PyInstaller','pefile','uncompyle6','decompyle3']:
    print(m, importlib.util.find_spec(m))
PY

Considering unpacking and extraction options

I have lots of tokens, but I need to get the embedded Python script. Tools like pyinstxtractor or UPX extraction might help, but I can't access the internet. I could try extracting the PyInstaller archive manually, or maybe install the lief library via pip if possible. If not, I'll check whether strace is already installed so I can intercept the binary's syscalls.

bash -lc which strace || true; which ltrace || true; which gdb || true

Exploring alternatives for reverse engineering

There's no strace, so I’m considering tracing the connect system call and using LD_PRELOAD, but it's tricky with a static binary. I could try to set up a dummy server, but I'd need to know the host/port, which might be hidden in encrypted config. Maybe using objdump or gdb could help inspect the binary.

bash -lc readelf -h /mnt/data/out_reconstructed; echo '---'; readelf -l /mnt/data/out_reconstructed | sed -n '1,200p'; echo '---'; objdump -x /mnt/data/out_reconstructed | sed -n '1,120p'

Looking at unpacking methods and runtime tracing

Alright, there’s a lot to consider, especially with the UPX compression and possible PyInstaller extraction. I could try dynamic tracing via ptrace, though it might be overkill, or maybe just extract from the pyz. The UPX unpacking is tricky, but I could try parsing or memory dumping if the binary exits too quickly.

Inspecting network connections and response decryption

It seems possible that the binary could be attempting to connect to localhost on a fixed port. I’m wondering if patching the connect function makes sense, though it's a static binary. Maybe inspecting memory-mapped PyInstaller archives after UPX unpacking could help with extraction. I also need to explore parsing encrypted server responses.

bash -lc python - <<'PY'
import dpkt, socket, re
from collections import defaultdict

PCAP='/mnt/data/traffic_hunt.pcapng'; client='10.1.243.155'; server='10.1.33.69'; port=8080
streams=defaultdict(lambda:{'c2s':{},'s2c':{}})
with open(PCAP,'rb') as f:
    for ts,buf in dpkt.pcapng.Reader(f):
        try:
            eth=dpkt.ethernet.Ethernet(buf); ip=eth.data; tcp=ip.data
            if not isinstance(ip, dpkt.ip.IP) or not isinstance(tcp, dpkt.tcp.TCP): continue
            sip=socket.inet_ntoa(ip.src); dip=socket.inet_ntoa(ip.dst)
            if {sip,dip}!={client,server}: continue
            if tcp.sport!=port and tcp.dport!=port: continue
            if sip==client and dip==server:
                streams[tcp.sport]['c2s'][tcp.seq]=bytes(tcp.data)
            elif sip==server and dip==client:
                streams[tcp.dport]['s2c'][tcp.seq]=bytes(tcp.data)
        except: pass

def join(segdict):
    out=b''; cur=None
    for seq,data in sorted(segdict.items()):
        if not data: continue
        if cur is None:
            out+=data; cur=seq+len(data); continue
        if seq>=cur:
            out+=data; cur=seq+len(data)
        else:
            ov=cur-seq
            if ov < len(data):
                out+=data[ov:]; cur+=len(data)-ov
    return out

for sport in [52027,52028,52029,52030,52031,52032,52033,52034,52035]:
    s=join(streams[sport]['s2c'])
    print('\nSTREAM',sport,'len',len(s),'countHTTP',s.count(b'HTTP/1.1'))
    print('start',repr(s[:200]))
    print('tail',repr(s[-200:]))
PY

Parsing chunked responses and analyzing streams

It looks like the responses use chunked transfer encoding with a 64-byte chunk size. I’ll need to parse the chunks and concatenate them to decrypt the body. Once that’s done, I can check if there’s any execution log or result. I should focus on parsing responses to trace all streams and check for missing information like flags from any server connections.
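Stripping the chunk framing is mechanical once a stream is reassembled. A minimal dechunker sketch (standalone; `dechunk` is a hypothetical helper, not part of the scripts above, and it assumes the body is complete with no trailer headers):

```python
def dechunk(body: bytes) -> bytes:
    """Concatenate the data portions of an HTTP/1.1 chunked body."""
    out, pos = b'', 0
    while True:
        eol = body.index(b'\r\n', pos)
        # chunk size is hexadecimal and may carry ';ext=...' extensions
        size = int(body[pos:eol].split(b';')[0], 16)
        if size == 0:
            break  # terminating zero-size chunk
        start = eol + 2
        out += body[start:start + size]
        pos = start + size + 2  # skip the CRLF that follows the chunk data
    return out

sample = b'4\r\nWiki\r\n5\r\npedia\r\n0\r\n\r\n'
print(dechunk(sample))  # b'Wikipedia'
```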

Searching for execution results and flag

I suspect the binary might handle execution outcomes separately or through other connections, not in stream 52036. It looks like the flag could be part of the binary itself, requiring decompilation or dynamic analysis to find. I'll focus on parsing responses for any traces of key information, like config or AES key.

Exploring PyInstaller extraction options

The traceback points to implant.py, so I might be able to extract the PyInstaller archive by reconstructing a memory dump after decompressing. Alternatively, I could use or implement pyinstxtractor. I also need to confirm if UPX just packages the stub, leaving the archive intact, and search for relevant strings like MEI or pyi.

bash -lc python - <<'PY'
from pathlib import Path
data=Path('/mnt/data/out_reconstructed').read_bytes()
for pat in [b'MEI\x0c\x0b\x0a\x0b\x0e', b'PYZ', b'pyi-windows-manifest-filename', b'pyi-runtime-tmpdir', b'PYINSTALLER', b'pyiboot01_bootstrap', b'implant.py']:
    idx=data.find(pat)
    print(pat, idx)
PY

Extracting and inspecting PyInstaller archive

The archive seems intact, so I can extract the PyInstaller archive directly. I'll need to implement a pyinstxtractor-like parser. Since the MEI cookie is found near the end, I'll inspect it further. The cookie structure for PyInstaller v2.1+ should be '!8sIIII64s', which is 88 bytes. I'll dump the bytes from the cookie onward in hex using Python.
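Before parsing, the cookie layout size can be verified in isolation (a standalone sketch; the field names follow the pyinstxtractor convention):

```python
import struct

# PyInstaller 2.1+ CArchive cookie:
#   magic (8s) + pkglen, toc_offset, toc_len, pyver (4 x I) + python lib name (64s)
COOKIE_FMT = '!8sIIII64s'
print(struct.calcsize(COOKIE_FMT))  # 88
```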

bash -lc python - <<'PY'
from pathlib import Path
import struct, binascii
p=Path('/mnt/data/out_reconstructed').read_bytes()
idx=p.find(b'MEI\x0c\x0b\x0a\x0b\x0e')
print('cookie idx',idx,'filelen',len(p),'trailer',len(p)-idx)
print(binascii.hexlify(p[idx:idx+120]))
print(p[idx:idx+120])
PY

Inspecting cookie structure and trailer

The cookie trailer length of 2260 suggests that after the cookie, there could be a section table. I need to parse the struct, which for PyInstaller 2.1+ seems to be 8s I I I I 64s, totaling 88 bytes. The larger trailer size suggests additional data like the TOC, but since the cookie isn't at the very end, I'll use rfind to make sure I catch the last occurrence.

bash -lc python - <<'PY'
from pathlib import Path
p=Path('/mnt/data/out_reconstructed').read_bytes()
pat=b'MEI\x0c\x0b\x0a\x0b\x0e'
print('find',p.find(pat),'rfind',p.rfind(pat),'len',len(p))
PY

Reevaluating cookie position and structure

It seems like the cookie isn't at the end after all. It might be followed by binary section headers, but the file doesn't actually have section headers. The 2260 bytes after the cookie likely indicate it's located at the start of the package trailer, not the end. I’ll check the last 300 bytes in ASCII to verify this.

bash -lc python - <<'PY'
from pathlib import Path
p=Path('/mnt/data/out_reconstructed').read_bytes()
print(repr(p[-200:]))
PY

Confirming PyInstaller archive structure

I’ve pieced together the PyInstaller format: the cookie at the end is 88 bytes for versions 2.1+ and contains details like the magic number, package length, and Python version. Using some byte values, I’ve calculated the package start and Table of Contents (TOC) position — and everything checks out perfectly!

Confirming TOC entry format for parsing

I think the TOC entries follow a specific format: first the entry size, followed by position, compressed data size, uncompressed size, compression flag, data type, and file name. I remember how pyinstxtractor handles it with struct unpacking. There might be extra padding to ensure 16-byte alignment, and I’ll need to check for various data type flags like 'b' or 'o'. Let’s go ahead with writing the extractor.

bash -lc python - <<'PY'
from pathlib import Path
import struct, zlib, os
p=Path('/mnt/data/out_reconstructed').read_bytes()
magic=b'MEI\x0c\x0b\x0a\x0b\x0e'
idx=p.rfind(magic)
pkglen,toc_offset,toc_len,pyver = struct.unpack('!IIII', p[idx+8:idx+24])
pylib = p[idx+24:idx+88].split(b'\0',1)[0]
pkg_start = len(p)-pkglen
print('pkglen',pkglen,'toc_offset',toc_offset,'toc_len',toc_len,'pyver',pyver,'pylib',pylib,'pkg_start',pkg_start)

toc = p[pkg_start+toc_offset: pkg_start+toc_offset+toc_len]
print('toc bytes',len(toc))
pos=0; entries=[]
while pos < len(toc):
    entry_size = struct.unpack('!I', toc[pos:pos+4])[0]
    chunk = toc[pos+4: pos+entry_size]
    try:
        entryPos, cmprsdSize, uncmprsdSize, cmprsFlag, typeChar, nameLen = struct.unpack('!IIIBcH', chunk[:16])
    except struct.error as e:
        print('struct error at pos',pos,'entry_size',entry_size,e); break
    name = chunk[16:16+nameLen].rstrip(b'\0').decode('utf-8','ignore')
    entries.append((name, entryPos, cmprsdSize, uncmprsdSize, cmprsFlag, typeChar.decode('latin1')))
    pos += entry_size
print('entries',len(entries))
for e in entries:
    print(e)
PY

Checking entry size and TOC alignment

I see that the entry size at pkg_start + toc_offset seems to be 0, which could mean either a wrong byte order (endianness) or incorrect field ordering. I'll need to inspect the bytes around toc_start to confirm what's happening and figure out how to correct the layout. Let's take a closer look!

bash -lc python - <<'PY'
from pathlib import Path
import struct, binascii
p=Path('/mnt/data/out_reconstructed').read_bytes()
idx=p.rfind(b'MEI\x0c\x0b\x0a\x0b\x0e')
pkglen,toc_offset,toc_len,pyver = struct.unpack('!IIII', p[idx+8:idx+24])
pkg_start=len(p)-pkglen
toc_start=pkg_start+toc_offset
print('toc_start',toc_start)
print(binascii.hexlify(p[toc_start:toc_start+64]))
print(repr(p[toc_start:toc_start+64]))
PY

Reevaluating TOC and package start

The toc_start looks like it may be pointing to an ELF dynamic section. Maybe my calculation for pkg_start is off, especially since 27990 isn't page-aligned. I wonder if there's a UPX stub with overlay involved. I should check the last 88 bytes and try parsing in little-endian format for accuracy. Time to inspect this further!

bash -lc python - <<'PY'
from pathlib import Path
import struct
p=Path('/mnt/data/out_reconstructed').read_bytes()
idx=p.rfind(b'MEI\x0c\x0b\x0a\x0b\x0e')
print('idx',idx,'len',len(p))
for fmt in ['!IIII','<IIII','>IIII']:
    vals=struct.unpack(fmt,p[idx+8:idx+24])
    print(fmt,vals,'pkgstart',len(p)-vals[0])
PY

Investigating package start and potential compression

I’m thinking the pkg_start might be correct after all since the PyInstaller overlay could begin at 27990. The TOC starts at 10789468, but those bytes don’t seem to match, possibly due to toc_offset's relation to the package start in the decompressed archive. Could the package itself be compressed by UPX? I should check bytes near 27990 to see if the data starts there and look for a PYZ reference around that area. Time to investigate further!

bash -lc python - <<'PY'
from pathlib import Path
p=Path('/mnt/data/out_reconstructed').read_bytes()
for off in [27990, 27990+100, 9782835, 10789468]:
    print('off',off,repr(p[off:off+80]))
PY

Searching for file references near cookie

I think it might be more straightforward to search for names close to the cookie and see if I can find anything useful. Using a PyInstaller archive finder might help. I could also search for something like "implant.pyc" within the file to see if that leads me to relevant data. Let's try that!

bash -lc strings -a -n 1 /mnt/data/out_reconstructed | egrep 'implant|pyimod|pyiboot|PYZ|base_library|struct|socket' | sed -n '1,200p'

Locating TOC entries by searching strings

Alright, it looks like searching strings near the end might help locate the CArchive TOC entries. The strings -a -n 1 command found simplant, which could be important for identifying names. I’ll need to parse the TOC carefully, and check binary entries that may not be at their expected positions. Also, I should search for the base_library.zip offset.

bash -lc python - <<'PY'
from pathlib import Path
p=Path('/mnt/data/out_reconstructed').read_bytes()
for pat in [b'implant', b'base_library.zip', b'PYZ.pyz', b'pyiboot01_bootstrap', b'pyimod01_archive', b'pyimod02_importers', b'pyimod03_ctypes']:
    idx=p.find(pat)
    print(pat, idx)
PY

Parsing TOC near string offsets

It seems like I'll need to manually parse the TOC from around the string locations, particularly near 10787346, since the TOC doesn’t start at 10789468. The offsets for names around 10787346 are actually before the cookie. So, the TOC might actually start earlier, around 10787200. To confirm, I’ll inspect around one name to better infer the structure. Maybe I should check bytes around 10787300 to find the correct entry size fields.

bash -lc python - <<'PY'
from pathlib import Path
import binascii
p=Path('/mnt/data/out_reconstructed').read_bytes()
start=10787280
chunk=p[start:start+500]
print('ascii snippet',chunk)
print(binascii.hexlify(chunk[:120]))
PY

Manually parsing TOC and entry alignment

I’m thinking of manually parsing the TOC from this position. The entry structure looks like it starts with a possible entry size of 0x0372. But when I tried interpreting it as big-endian, the result seemed nonsensical. If it's little-endian, maybe the entry size is 0x00027203, but still too large. So, I’ll try to back-calculate the offsets by looking for names, perhaps starting around the 'mstruct' name at 10787312, and ensure proper entry alignment based on familiar structures.

bash -lc python - <<'PY'
from pathlib import Path
p=Path('/mnt/data/out_reconstructed').read_bytes()
for pat in [b'mstruct', b'pyimod01_archive', b'pyimod02_importers', b'implant', b'PYZ.pyz', b'base_library.zip']:
    idx=p.find(pat)
    print(pat, idx)
    print('prev40', p[idx-40:idx])
PY

Parsing entry structure and searching for alignment

We have a lot to work with, and it looks like we can parse the entry structure using the known format, with 4-byte entry lengths in big-endian. The bytes before the name could contain the type and length fields, and the structure potentially matches the one in pyinstxtractor. I'll continue with this analysis and check the surrounding bytes for alignment, especially the entry size.

Finding the start of the entry structure

To locate the exact start of an entry, I'm checking nearby bytes for a type of 'm' and the "mstruct" name, likely padded to a multiple of 16. Before the name "mpyimod01_archive", the structure seems to start at 00 00 00 30. I'm analyzing the entry size, compressed size, and uncompressed size values to refine this start point. It's almost there; I just need to backtrack a bit for final alignment.

Clarifying entry structure and name parsing

I recall that the entry might not have a name-length field; instead the name is null-padded to fill the entry size minus the header. Looking at the mstruct entry, the type byte can be followed directly by the name, which confirms that names are stored after the type. For "mstruct", the type is 'm' and the name is "struct", which fits perfectly!

bash -lc python - <<'PY'
print(10787296-10761478)
PY

Clarifying package and entry structure

I’m thinking the package might actually start at 25818 instead of 27990, with a difference of 2172. This could be due to either excluding the ELF overlay or including the cookie. I’ll try inferring toc_start by adjusting for the cookie index and the table length, then parse entries accordingly.

bash -lc python - <<'PY'
from pathlib import Path
import struct, zlib, os
p=Path('/mnt/data/out_reconstructed').read_bytes()
magic=b'MEI\x0c\x0b\x0a\x0b\x0e'
idx=p.rfind(magic)
pkglen,toc_offset,toc_len,pyver = struct.unpack('!IIII', p[idx+8:idx+24])
pylib = p[idx+24:idx+88].split(b'\0',1)[0]
toc_start = idx - toc_len
pkg_start = toc_start - toc_offset
print('idx',idx,'toc_start',toc_start,'pkg_start',pkg_start,'pyver',pyver,pylib)
toc = p[toc_start:idx]
pos=0; entries=[]
while pos < len(toc):
    entry_size = struct.unpack('!I', toc[pos:pos+4])[0]
    if entry_size<=0 or pos+entry_size>len(toc):
        print('bad entry at',pos,'size',entry_size); break
    entryPos, cmprsdSize, uncmprsdSize = struct.unpack('!III', toc[pos+4:pos+16])
    cmprsFlag = toc[pos+16]
    typeChar = chr(toc[pos+17])
    name_raw = toc[pos+18:pos+entry_size]
    name = name_raw.split(b'\0',1)[0].decode('utf-8','ignore')
    entries.append((name, entryPos, cmprsdSize, uncmprsdSize, cmprsFlag, typeChar, entry_size))
    pos += entry_size
print('entries',len(entries))
for e in entries:
    print(e)
PY

Extracting and inspecting script entry

I need to extract the implant script, likely corresponding to a type 's' entry, which could be the source .pyc file. If it’s compressed, I might try decompressing it with zlib to reveal the pyc or the script text. Let’s extract this entry and investigate further.

bash -lc python - <<'PY'
from pathlib import Path
import struct, zlib, marshal, types, dis
p=Path('/mnt/data/out_reconstructed').read_bytes()
idx=p.rfind(b'MEI\x0c\x0b\x0a\x0b\x0e')
pkglen,toc_offset,toc_len,pyver = struct.unpack('!IIII', p[idx+8:idx+24])
toc_start = idx - toc_len
pkg_start = toc_start - toc_offset
toc=p[toc_start:idx]
pos=0; entries=[]
while pos < len(toc):
    entry_size=struct.unpack('!I', toc[pos:pos+4])[0]
    entryPos, cmprsdSize, uncmprsdSize = struct.unpack('!III', toc[pos+4:pos+16])
    cmprsFlag=toc[pos+16]; typ=chr(toc[pos+17])
    name=toc[pos+18:pos+entry_size].split(b'\0',1)[0].decode('utf-8','ignore')
    entries.append((name,entryPos,cmprsdSize,uncmprsdSize,cmprsFlag,typ))
    pos += entry_size

for name,entryPos,cmprsdSize,uncmprsdSize,cmprsFlag,typ in entries:
    if name=='implant':
        blob=p[pkg_start+entryPos:pkg_start+entryPos+cmprsdSize]
        if cmprsFlag:
            blob=zlib.decompress(blob)
        print('typ',typ,'len',len(blob),'first40',blob[:40])
        open('/tmp/implant.bin','wb').write(blob)
        try:
            co=marshal.loads(blob[16:])
            print('marshaled after 16 ok',co)
        except Exception as e:
            print('marshal16 fail',e)
        for off in [0,8,12,16]:
            try:
                co=marshal.loads(blob[off:])
                print('marshal at',off,'ok type',type(co),'name',getattr(co,'co_name',None))
                if isinstance(co, types.CodeType):
                    print('consts',co.co_consts[:10])
                    print('names',co.co_names[:20])
                    dis.dis(co)
                break
            except Exception:
                pass
PY

Inspecting code object in the blob

The blob seems to contain a code object directly, starting with marshal data e3 (for Python 3.9). I’ll use marshal.loads(blob) to load it and then possibly decompile it using dis, so I can inspect the constants and understand the content better. Let's proceed with that!

bash -lc python - <<'PY'
from pathlib import Path
import struct, zlib, marshal, types, dis

p = Path('/mnt/data/out_reconstructed').read_bytes()
idx = p.rfind(b'MEI\x0c\x0b\x0a\x0b\x0e')
pkglen, toc_offset, toc_len, pyver = struct.unpack('!IIII', p[idx+8:idx+24])
toc_start = idx - toc_len
pkg_start = toc_start - toc_offset
toc = p[toc_start:idx]
pos = 0; entries = []
while pos < len(toc):
    entry_size = struct.unpack('!I', toc[pos:pos+4])[0]
    entryPos, cmprsdSize, uncmprsdSize = struct.unpack('!III', toc[pos+4:pos+16])
    cmprsFlag = toc[pos+16]; typ = chr(toc[pos+17])
    name = toc[pos+18:pos+entry_size].split(b'\0', 1)[0].decode('utf-8', 'ignore')
    entries.append((name, entryPos, cmprsdSize, uncmprsdSize, cmprsFlag, typ))
    pos += entry_size
blob = None
for name, entryPos, cmprsdSize, uncmprsdSize, cmprsFlag, typ in entries:
    if name == 'implant':
        blob = p[pkg_start+entryPos:pkg_start+entryPos+cmprsdSize]
        if cmprsFlag:
            blob = zlib.decompress(blob)
        break
co = marshal.loads(blob)
print('loaded', type(co), co.co_name, co.co_filename)
print('names', co.co_names)
print('consts', co.co_consts)
print('---dis---')
dis.dis(co)
PY

Dealing with potential version issues for code object

I think the issue is a Python version mismatch: we're running 3.13, and the marshal format has changed since this bytecode was produced, so marshal.loads fails. I'll explore other options, like using xdis or installing a compatible version of pycdc or decompyle3. With no internet access, I'll first check whether xdis is available locally or whether an older Python interpreter is installed.

bash -lc which python3.9 || true; /usr/bin/python3.9 -V 2>/dev/null || true; ls /usr/bin/python3* | sed -n '1,20p'

Exploring potential tools for decompiling

I'm considering marshalparser or maybe xdis to handle this, but neither seems like a good fit. I could try parsing the code object with uncompyle6, if it's installed. Internet may be down, but pip might still reach an internal cache, so I'll try installing things that way. Let's give it a go.

bash -lc python -m pip install xdis decompyle3 uncompyle6 -q

Proceeding with decompiling

The installation seems to have completed successfully, though there was no output. Now, I need to move forward with decompiling the code. Let's see if everything works as expected now that the necessary steps are in place. Hope it goes smoothly from here!

bash -lc python - <<'PY'
from pathlib import Path
import struct, zlib
from xdis.load import load_module_from_file_object
from xdis.magics import magicint2version
import marshal, types

p = Path('/mnt/data/out_reconstructed').read_bytes()
idx = p.rfind(b'MEI\x0c\x0b\x0a\x0b\x0e')
pkglen, toc_offset, toc_len, pyver = struct.unpack('!IIII', p[idx+8:idx+24])
toc_start = idx - toc_len
pkg_start = toc_start - toc_offset
toc = p[toc_start:idx]
pos = 0; entries = []
while pos < len(toc):
    entry_size = struct.unpack('!I', toc[pos:pos+4])[0]
    entryPos, cmprsdSize, uncmprsdSize = struct.unpack('!III', toc[pos+4:pos+16])
    cmprsFlag = toc[pos+16]; typ = chr(toc[pos+17])
    name = toc[pos+18:pos+entry_size].split(b'\0', 1)[0].decode('utf-8', 'ignore')
    entries.append((name, entryPos, cmprsdSize, uncmprsdSize, cmprsFlag, typ))
    pos += entry_size
for target in ['implant', 'pyiboot01_bootstrap', 'pyi_rth_inspect']:
    for name, entryPos, cmprsdSize, uncmprsdSize, cmprsFlag, typ in entries:
        if name == target:
            blob = p[pkg_start+entryPos:pkg_start+entryPos+cmprsdSize]
            if cmprsFlag:
                blob = zlib.decompress(blob)
            out = Path(f'/tmp/{target}.pyc')
            # Add a pyc header: magic + flags + mtime/hash padding. xdis may expect
            # a real pyc rather than raw marshal, so build a faux pyc.
            # Python 3.9 magic bytes (0x61 0d 0d 0a) were seen in the PYZ header; use that.
            header = b'\x61\x0d\x0d\x0a' + b'\0'*12
            out.write_bytes(header + blob)
            print('wrote', out, len(blob), typ)
PY
python - <<'PY'
from pathlib import Path
from decompyle3.main import decompile_file
for fn in ['/tmp/implant.pyc', '/tmp/pyiboot01_bootstrap.pyc', '/tmp/pyi_rth_inspect.pyc']:
    print('===', fn, '===')
    try:
        decompile_file(fn, outstream=None)
    except Exception as e:
        print('err', e)
PY

Exploring decompiling options

Okay, it seems like decompyle3 might not work with Python 3.9, which is odd. Maybe uncompyle6 could support it instead? I was also considering using pycdc, but that doesn't look promising. Instead, I think I'll try using xdis since it can show disassembly—let's see how that goes!

bash -lc python - <<'PY'
import uncompyle6
print('uncompyle6', uncompyle6.__version__)
PY
uncompyle6 /tmp/implant.pyc 2>&1 | sed -n '1,200p'

Evaluating disassembly options for Python 3.9

I’m thinking of trying xdis to get the disassembly for Python 3.9, maybe it supports it. Alternatively, parsing the Pyz or embedded source might give me enough to inspect the names and constants of the code objects without needing to fully decompile. I’ll look through the xdis docs or try the module with a command to get more insight.

bash -lc python -m xdis /tmp/implant.pyc | sed -n '1,260p'

Inspecting PYZ and xdis output for logic

I think continuing with xdis might give me enough to infer some logic or maybe identify a flag. I'll run more lines for the disassembly. There's also the settings module, which might contain useful data. I need to extract and inspect the PYZ contents, looking for specific flags or keys. I'll start by parsing it and checking how it connects, particularly looking to see if the implant interacts with a server.

Disassembling functions and inspecting settings

I'll start by disassembling the main function and possibly send_data/recv_data to get a better understanding of the protocol, then check out the settings module in the PYZ. I might find the flag there or maybe the settings have an encrypted payload or command list. I'll use xdis or pydisasm to output the nested code and save it for later inspection.

bash -lc python -m xdis /tmp/implant.pyc > /tmp/implant.dis && sed -n '260,520p' /tmp/implant.dis

Inspecting further lines of main function

I think I need to check more lines of the main function to get a clearer picture. There might be more relevant information hidden further down that could help me understand the protocol and flow of things better. I’ll follow up with the rest of it.

bash -lc sed -n '520,760p' /tmp/implant.dis

Continuing with main function disassembly

I need to proceed with disassembling more lines of the main function. This might give me further insight into the logic or help highlight any important behavior. I’ll keep going with the rest of it and analyze as I move forward.

bash -lc sed -n '760,980p' /tmp/implant.dis

Inspecting the settings module in PYZ

I need to look at the settings module in the implant, which is imported from a PYZ file. I'll extract and parse the PYZ, focusing on its header, which starts with b'PYZ\0' followed by the pyc magic bytes and a tocpos (possibly in big-endian format). Let’s check the first few bytes.

bash -lc python - <<'PY'
from pathlib import Path
import struct, zlib, binascii

p = Path('/mnt/data/out_reconstructed').read_bytes()

# extract PYZ blob
idx = p.rfind(b'MEI\x0c\x0b\x0a\x0b\x0e')
pkglen, toc_offset, toc_len, pyver = struct.unpack('!IIII', p[idx+8:idx+24])
toc_start = idx - toc_len
pkg_start = toc_start - toc_offset
toc = p[toc_start:idx]
pos = 0; pyz = None
while pos < len(toc):
    es = struct.unpack('!I', toc[pos:pos+4])[0]
    ep, cs, us = struct.unpack('!III', toc[pos+4:pos+16])
    cf = toc[pos+16]; typ = chr(toc[pos+17])
    name = toc[pos+18:pos+es].split(b'\0', 1)[0].decode('utf-8', 'ignore')
    if name == 'PYZ.pyz':
        pyz = p[pkg_start+ep:pkg_start+ep+cs]
        break
    pos += es
print('pyz len', len(pyz), 'head', repr(pyz[:40]), 'hex', binascii.hexlify(pyz[:20]))

# save
Path('/tmp/PYZ.pyz').write_bytes(pyz)
PY

Parsing the PYZ TOC structure

I need to parse the PYZ file’s TOC. The format starts with b'PYZ\0' followed by the pyc magic, tocpos, and maybe a hash. After the magic bytes, the tocpos seems to point to a position, potentially near the end of the PYZ file, leaving room for the TOC. I'll check if this format fits.

Examining compressed data after TOC

I think after the TOC at offset 12, there's compressed data starting with a zlib header (78 9c). But there's a pattern of five zero bytes before it. I wonder if the first item starts with typecode 0. I'll inspect the area around the TOC position to verify this.

bash -lc python - <<'PY'
from pathlib import Path
import binascii, struct
pyz = Path('/tmp/PYZ.pyz').read_bytes()
for off in [0, 12, 16, 17, 0x0f38b6, len(pyz)-100]:
    print('off', off, repr(pyz[off:off+60]))
print('tocpos guess', struct.unpack('!I', pyz[8:12])[0])
PY

Parsing TOC and loading marshal dictionary

Alright, I need to parse the TOC at offset 997558, which points to a marshal dictionary. Since it's likely simple objects, I think I can load it using marshal.loads. I’ll try that and parse the dictionary from there, skipping unnecessary sections for now. Let’s go for it.

bash -lc python - <<'PY'
from pathlib import Path
import marshal, struct
pyz = Path('/tmp/PYZ.pyz').read_bytes()
tocpos = struct.unpack('!I', pyz[8:12])[0]
for off in [tocpos, tocpos+1, tocpos-1]:
    try:
        obj = marshal.loads(pyz[off:])
        print('off', off, 'type', type(obj), 'len', len(obj) if hasattr(obj, '__len__') else None)
        if isinstance(obj, dict):
            print('first keys', list(obj.keys())[:20])
        break
    except Exception as e:
        print('off', off, 'err', e)
PY

Inspecting list of tuples

It seems like the list has 158 entries, and it's probably a list of tuples. I’ll need to take a closer look at the individual entries to figure out their structure and meaning. Let's examine them carefully to understand their content.

bash -lc python - <<'PY'
from pathlib import Path
import marshal, struct, pprint
pyz = Path('/tmp/PYZ.pyz').read_bytes()
tocpos = struct.unpack('!I', pyz[8:12])[0]
obj = marshal.loads(pyz[tocpos:])
print(type(obj), len(obj))
for item in obj[:20]:
    print(item)
PY

Inspecting settings module in PYZ

I need to extract the settings module from the PYZ file, as it might also contain some flags or IP addresses. Let's carefully check the TOC entries around where the settings module is located to understand its structure and content.

bash -lc python - <<'PY'
from pathlib import Path
import marshal, struct
pyz = Path('/tmp/PYZ.pyz').read_bytes()
toc = marshal.loads(pyz[struct.unpack('!I', pyz[8:12])[0]:])
for item in toc:
    if 'settings' in item[0] or 'implant' in item[0]:
        print(item)
PY

Extracting and inspecting settings module

I need to extract and disassemble the settings module, as it might contain useful constants or flags. It could also be worth checking other potentially interesting modules. I'll start with extracting settings.pyc and investigate the PYZ entries, which might be compressed data. Let's try parsing and extracting them.

bash -lc python - <<'PY'
from pathlib import Path
import marshal, struct, zlib
pyz = Path('/tmp/PYZ.pyz').read_bytes()
toc = marshal.loads(pyz[struct.unpack('!I', pyz[8:12])[0]:])
for name, (ispkg, pos, length) in toc:
    if name == 'settings':
        data = pyz[17+pos:17+pos+length]
        print('raw head', data[:20])
        try:
            dec = zlib.decompress(data)
            print('dec len', len(dec), 'head', dec[:20])
            Path('/tmp/settings.bin').write_bytes(dec)
        except Exception as e:
            print('zlib err', e)
PY

Investigating PYZ encryption and compression

I need to investigate the PYZ encryption, as PyInstaller's PYZ entries might be zlib-compressed, and possibly encrypted too. However, by default, there’s no encryption. The first byte isn’t a zlib header, so maybe the data is compressed with zlib but stored at a different position? I'll check by extracting a module like 'base64' and verifying the data at offset 17.

bash -lc python - <<'PY'
from pathlib import Path
import marshal, struct, zlib
pyz = Path('/tmp/PYZ.pyz').read_bytes()
toc = marshal.loads(pyz[struct.unpack('!I', pyz[8:12])[0]:])
for name in ['base64', 'settings']:
    ispkg, pos, length = dict(toc)[name]
    print('\nname', name, 'tuple', dict(toc)[name])
    for base in [0, 12, 16, 17]:
        data = pyz[base+pos:base+pos+length]
        try:
            dec = zlib.decompress(data)
            print(' base', base, 'zlib ok len', len(dec), 'head', dec[:10])
        except Exception as e:
            print(' base', base, 'zlib fail', str(e)[:50], 'head', data[:4])
PY

Extracting settings and using xdis

I’ll start from the file’s base zero and extract the settings. After that, I could use xdis by adding the header for functionality. Let’s proceed with that and see where it gets us.

bash -lc python - <<'PY'
from pathlib import Path
import marshal, struct, zlib
pyz = Path('/tmp/PYZ.pyz').read_bytes()
toc = dict(marshal.loads(pyz[struct.unpack('!I', pyz[8:12])[0]:]))
for name in ['settings', 'base64']:
    ispkg, pos, length = toc[name]
    dec = zlib.decompress(pyz[pos:pos+length])
    Path(f'/tmp/{name}.pyc').write_bytes(b'\x61\x0d\x0d\x0a' + b'\0'*12 + dec)
    print(name, len(dec))
PY
python -m xdis /tmp/settings.pyc | sed -n '1,260p'

Searching for the flag and testing ideas

I’m considering if the flag could be related to the C2 address, 10.1.243.155:7788, but the format doesn't seem to match. I’ll check module names to see if one might contain the flag. The implant likely functions as a reverse shell, and the flag could be from the server response post-execution. I’ll try to test some possibilities.

Exploring connection options for local IP

The connection to 155:7788 is refused by the network, so I’m considering how to make that IP local. Maybe by using network namespace, or perhaps an easier approach would be to use socat or a Python server on 0.0.0.0:7788 and add a route. I’ll check if ip exists and review permissions.

bash -lc which ip || true; id; ip addr show lo | sed -n '1,40p'

Testing local listener with AES key and protocol emulation

I’m thinking of assigning 10.1.243.155 to the loopback, running a local listener on 7788, and then executing the implant. Since we have the AES key, we could try emulating the C2 to get a response or flag. But I’m not sure yet if the flag is somewhere else, like on the victim's machine.
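Had a local emulation been attempted, the wire format reconstructed from the capture is a 4-byte length prefix followed by a 12-byte nonce and the AES-GCM ciphertext. The sketch below covers only the length-prefixed framing; `frame`/`unframe` are illustrative names, the little-endian length is an assumption inferred from the native `struct.unpack('i', ...)` usage, and the crypto layer is omitted.

```python
import struct

def frame(payload: bytes) -> bytes:
    # 4-byte little-endian signed length, then the payload
    # (in the real protocol the payload is nonce || AES-GCM ciphertext)
    return struct.pack('<i', len(payload)) + payload

def unframe(blob: bytes) -> list:
    msgs, pos = [], 0
    while pos + 4 <= len(blob):
        n, = struct.unpack('<i', blob[pos:pos + 4])
        if n < 0 or pos + 4 + n > len(blob):
            break  # malformed or truncated frame
        msgs.append(blob[pos + 4:pos + 4 + n])
        pos += 4 + n
    return msgs

stream = frame(b'pwd') + frame(b'ls')
print(unframe(stream))  # [b'pwd', b'ls']
```

An emulated C2 would send `frame(...)` over the accepted socket and run `unframe(...)` on received bytes before decrypting each message.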

bash -lc python - <<'PY'
import dpkt, socket
from collections import Counter
PCAP = '/mnt/data/traffic_hunt.pcapng'
ct = Counter()
with open(PCAP, 'rb') as f:
    for ts, buf in dpkt.pcapng.Reader(f):
        try:
            eth = dpkt.ethernet.Ethernet(buf); ip = eth.data; tcp = ip.data
            if not isinstance(ip, dpkt.ip.IP) or not isinstance(tcp, dpkt.tcp.TCP):
                continue
            sip = socket.inet_ntoa(ip.src); dip = socket.inet_ntoa(ip.dst)
            pair = tuple(sorted([(sip, tcp.sport), (dip, tcp.dport)]))
            ct[pair] += 1
        except Exception:
            pass
for pair, count in ct.most_common(20):
    print(count, pair)
PY

Narrowing search to port 7788

I’ve got a good number of packets, but I’ll focus on port 7788 specifically. The top 20 packets involved port 8080, but just in case, the relevant ones for this task might not be in those. Let’s see if filtering for port 7788 reveals anything useful.

bash -lc python - <<'PY'
import dpkt, socket
PCAP = '/mnt/data/traffic_hunt.pcapng'
ports = []
with open(PCAP, 'rb') as f:
    for ts, buf in dpkt.pcapng.Reader(f):
        try:
            eth = dpkt.ethernet.Ethernet(buf); ip = eth.data; tcp = ip.data
            if not isinstance(ip, dpkt.ip.IP) or not isinstance(tcp, dpkt.tcp.TCP):
                continue
            sip = socket.inet_ntoa(ip.src); dip = socket.inet_ntoa(ip.dst)
            if sip == '10.1.33.69' and dip == '10.1.243.155' and tcp.dport == 7788:
                ports.append((ts, tcp.sport, tcp.dport, len(tcp.data), tcp.flags))
            if sip == '10.1.243.155' and dip == '10.1.33.69' and tcp.sport == 7788:
                ports.append((ts, tcp.sport, tcp.dport, len(tcp.data), tcp.flags))
        except Exception:
            pass
print('count', len(ports))
for x in ports[:20]:
    print(x)
PY

Preparing to decode session data

I need to decode the 7788 session using the same AES key. It seems the implant connects from the victim's port 38162 to the attacker's port 7788. Based on the packet flow, commands are likely sent from the attacker to the implant, encrypted and length-prefixed. I'll decrypt and parse according to the implant protocol. Let’s proceed!

bash -lc python - <<'PY'
import dpkt, socket, struct, base64
from collections import defaultdict
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

PCAP = '/mnt/data/traffic_hunt.pcapng'

# AES key from the execution request
key = base64.b64decode('IhbJfHI98nuSvs5JweD5qsNvSQ/HHcE/SNLyEBU9Phs=')
aes = AESGCM(key)

# reassemble stream for 10.1.33.69:38162 <-> 10.1.243.155:7788
c2s = {}  # victim -> attacker (responses)
s2c = {}  # attacker -> victim (commands)
with open(PCAP, 'rb') as f:
    for ts, buf in dpkt.pcapng.Reader(f):
        try:
            eth = dpkt.ethernet.Ethernet(buf); ip = eth.data; tcp = ip.data
            if not isinstance(ip, dpkt.ip.IP) or not isinstance(tcp, dpkt.tcp.TCP):
                continue
            sip = socket.inet_ntoa(ip.src); dip = socket.inet_ntoa(ip.dst)
            if sip == '10.1.33.69' and dip == '10.1.243.155' and tcp.sport == 38162 and tcp.dport == 7788:
                c2s[tcp.seq] = bytes(tcp.data)
            elif sip == '10.1.243.155' and dip == '10.1.33.69' and tcp.sport == 7788 and tcp.dport == 38162:
                s2c[tcp.seq] = bytes(tcp.data)
        except Exception:
            pass

def join(segdict):
    out = b''; cur = None
    for seq, data in sorted(segdict.items()):
        if not data:
            continue
        if cur is None:
            out += data; cur = seq + len(data); continue
        if seq >= cur:
            out += data; cur = seq + len(data)
        else:
            ov = cur - seq
            if ov < len(data):
                out += data[ov:]; cur += len(data) - ov
    return out

cmdblob = join(s2c); respblob = join(c2s)
print('cmdblob', len(cmdblob), 'respblob', len(respblob))

def parse_msgs(blob):
    pos = 0; out = []
    while pos + 4 <= len(blob):
        n = struct.unpack('i', blob[pos:pos+4])[0]
        if n < 0 or n > 10_000_000 or pos + 4 + n > len(blob):
            print('bad len at', pos, 'n', n, 'remaining', len(blob) - pos)
            break
        enc = blob[pos+4:pos+4+n]
        nonce = enc[:12]; ct = enc[12:]
        try:
            pt = aes.decrypt(nonce, ct, None)
        except Exception as e:
            pt = b'<DECRYPTFAIL %s>' % str(e).encode()
        out.append(pt)
        pos += 4 + n
    return out, pos

cmds, cpos = parse_msgs(cmdblob)
resps, rpos = parse_msgs(respblob)
print('parsed cmds', len(cmds), 'pos', cpos, 'parsed resps', len(resps), 'rpos', rpos)
for i, x in enumerate(cmds, 1):
    print('CMD', i, repr(x[:200]))
for i, x in enumerate(resps, 1):
    print('RESP', i, repr(x[:500]))
PY

Analyzing potential flag format and encryption

I need to figure out how to derive the flag from the long string that could be encoded. It seems like it might be ciphertext or base64, potentially requiring an AES key for decryption. The challenge asks for a flag in the format dart{}, so I’ll inspect and decode the string—possibly base64 or encrypted. Let’s dive in!

bash -lc python - <<'PY'
import base64, binascii, hashlib
s = '3SoX7GyGU1KBVYS3DYFbfqQ2CHqH2aPGwpfeyvv5MPY5Dm1Wt9VYRumoUvzdmoLw6FUm4AMqR5zoi'
print('len', len(s))

# try base64 with padding
for alt in [s, s+'=', s+'==', s+'===']:
    try:
        b = base64.b64decode(alt)
        print('b64 len', len(b), 'hex', b[:32].hex())
        print('ascii', b)
        break
    except Exception:
        pass

# urlsafe
for alt in [s, s+'=', s+'==', s+'===']:
    try:
        b = base64.urlsafe_b64decode(alt)
        print('urlsafe len', len(b), 'hex', b[:32].hex())
        print('ascii maybe', b)
        break
    except Exception:
        pass
PY

Investigating potential encoding and string issues

I’m noticing that the base64 length is 77, which is invalid—77 mod 4 equals 1, an impossible result. Maybe it’s a typo or truncated? It's possible the command echoed a long string without quotes or that some characters were lost due to a read bug. I'll inspect the decrypted response further and test different encoding formats like Base58 or Base62. This might help pinpoint the flag. Let’s dig deeper!
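The length argument can be checked standalone: a valid base64 string's unpadded length mod 4 is 0, 2, or 3, never 1, because a lone trailing character carries only 6 bits, less than a whole byte. A quick stdlib-only check:

```python
import base64
import binascii

s = '3SoX7GyGU1KBVYS3DYFbfqQ2CHqH2aPGwpfeyvv5MPY5Dm1Wt9VYRumoUvzdmoLw6FUm4AMqR5zoi'
print(len(s), len(s) % 4)  # 77 characters, remainder 1 -> cannot be base64

# No amount of '=' padding can repair a remainder of 1
try:
    base64.b64decode(s + '=' * (-len(s) % 4))
    decodable = True
except binascii.Error:
    decodable = False
print('decodable as base64:', decodable)
```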

Analyzing potential flag encoding

Thinking about command 4, it might just be testing the network with a random token and not the flag. However, considering the challenge's hint that the flag comes from traffic, the attacker might have intentionally echoed an encrypted flag. I need to check the timing and see if the string could be an encoded flag. It could be Base58 or another encoded format. Let’s carefully analyze its character set.

bash -lc python - <<'PY'
s = '3SoX7GyGU1KBVYS3DYFbfqQ2CHqH2aPGwpfeyvv5MPY5Dm1Wt9VYRumoUvzdmoLw6FUm4AMqR5zoi'
alpha = '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz'

# base58 decode
n = 0
for ch in s:
    n = n*58 + alpha.index(ch)

# convert to bytes
out = n.to_bytes((n.bit_length()+7)//8, 'big')

# leading ones -> zero bytes
pad = 0
for ch in s:
    if ch == '1':
        pad += 1
    else:
        break
out = b'\x00'*pad + out
print('len', len(out), 'hex', out.hex())
print('ascii', out)
PY

Decoding base64 of ASCII

I need to decode a base64-encoded ASCII string. Let's make sure I'm handling this properly so I can convert it back to its original format and check if the result is meaningful. I'll stick to the direct approach of first decoding and then interpreting the output.

bash -lc python - <<'PY'
import base64
s = b'ZGFydHtkOTg1MGIyNy04NWNiLTQ3NzctODVlMC1kZjBiNzhmZGI3MjJ9'
print(base64.b64decode(s))
PY

Thought for 38m 25s

Answer

The flag is:

dart{d9850b27-85cb-4777-85e0-df0b78fdb722}

Brief walkthrough: the uploaded /var/tmp/out was first reconstructed from the port-8080 traffic, which revealed a reverse connection to 10.1.243.155:7788. After decrypting the 7788 session with the --aes-key supplied in the request, the attacker's commands become visible:

  • pwd
  • ls
  • echo Congratulations
  • echo 3SoX7GyGU1KBVYS3DYFbfqQ2CHqH2aPGwpfeyvv5MPY5Dm1Wt9VYRumoUvzdmoLw6FUm4AMqR5zoi
  • echo bye

Base58-decoding the long string yields a Base64 string: ZGFydHtkOTg1MGIyNy04NWNiLTQ3NzctODVlMC1kZjBiNzhmZGI3MjJ9

Base64-decoding that in turn gives the final flag: dart{d9850b27-85cb-4777-85e0-df0b78fdb722}
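The two-stage decode can be replayed end-to-end; this standalone snippet repeats the Base58 step from the session above (Bitcoin alphabet) followed by the Base64 step:

```python
import base64

B58_ALPHABET = '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz'

def b58decode(s: str) -> bytes:
    # Interpret the string as a big base-58 integer, emit big-endian bytes;
    # leading '1' characters map to leading zero bytes.
    n = 0
    for ch in s:
        n = n * 58 + B58_ALPHABET.index(ch)
    body = n.to_bytes((n.bit_length() + 7) // 8, 'big')
    pad = len(s) - len(s.lstrip('1'))
    return b'\x00' * pad + body

token = '3SoX7GyGU1KBVYS3DYFbfqQ2CHqH2aPGwpfeyvv5MPY5Dm1Wt9VYRumoUvzdmoLw6FUm4AMqR5zoi'
inner = b58decode(token)                 # Base58 -> Base64 text
flag = base64.b64decode(inner).decode()  # Base64 -> flag
print(flag)                              # dart{d9850b27-85cb-4777-85e0-df0b78fdb722}
```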
