Name: Nightmare on LLM Street: The Perils and Paradoxes of Knowing Your Foe
--client nbpy --show NBPy2024 --room barn 15404 --force
Author(s): Paris Buttfield-Addison
Location: Reis River Ranch
Date: Sat, Jun 29
Days Raw Files
Start: 16:20
First Raw Start: 15:40
Duration: 00:25:00
Offset: 0:39:39
End: 16:45
Last Raw End: 16:33
Chapters: 00:00, 0:00:53
Total cuts time: 21 min.
https://pretalx.northbaypython.org/nbpy-2024/talk/JGBYBF
raw-playlist
raw-mp4-playlist
encoded-files-playlist
host
archive
mp4
svg
png
assets
release.pdf
Nightmare_on_LLM_Street_The_Perils_and_Paradoxes_of_Knowing_Your_Foe.json
logs
Admin: episode, episode list, cut list, raw files day, marks day, image_files
State:
borked
edit
encode
push to queue
post
richard
review 1
email
review 2
make public
tweet
to-miror
conf
done
Locked: clear this to unlock
Locked by: user/process that set the lock
Start: initially scheduled time from master, adjusted to match reality
Duration: length in hh:mm:ss
Name: Video Title (shows in video search results)
Emails: email(s) of the presenter(s)
Released: Unknown / Yes / No (has someone authorised publication?)
Normalise:
Channelcopy: m=mono, 01=copy left to right, 10=copy right to left, 00=ignore (see the sketch after this list).
Thumbnail: filename.png
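The Channelcopy codes above can be read as instructions for an audio filter. As a minimal illustration (not Veyepar's actual implementation), here is a hypothetical mapping of each code to an ffmpeg `pan` filter; the file names and helper are made up for the example.

```python
# Hypothetical mapping from the Channelcopy codes to ffmpeg "pan" filters;
# illustrative only, not Veyepar's actual implementation.
CHANNELCOPY_FILTERS = {
    "01": "pan=stereo|c0=c0|c1=c0",   # copy left over right: both channels get the left audio
    "10": "pan=stereo|c0=c1|c1=c1",   # copy right over left: both channels get the right audio
    "m":  "pan=stereo|c0=0.5*c0+0.5*c1|c1=0.5*c0+0.5*c1",  # mono downmix to both channels
    "00": None,                       # ignore: leave the audio as-is
}

def channelcopy_cmd(src, dst, code):
    """Build an ffmpeg command applying the given Channelcopy code (sketch)."""
    cmd = ["ffmpeg", "-y", "-i", src]
    filt = CHANNELCOPY_FILTERS[code]
    if filt:
        cmd += ["-af", filt]
    cmd += ["-c:v", "copy", dst]      # video stream is passed through untouched
    return cmd

# Example: only the right channel has usable audio, so copy it to both sides.
print(" ".join(channelcopy_cmd("15_40_21.ts", "15_40_21_fixed.mp4", "10")))
```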
Description:
As Large Language Models (LLMs) gain trust across various sectors for tasks ranging from generating text to solving complex queries, their influence continues to expand. Yet, this trust is shadowed by significant risks, such as the subtle yet serious threat of data poisoning. This talk will delve into how deceptively crafted data can infiltrate an LLM’s training set, leading these models to propagate errors, biases, or outright fabrications, a real challenge to the integrity of their outputs.

While there are various algorithms and approaches designed to mitigate these risks, this session will focus particularly on the Rank-One Model Editing (ROME) algorithm. ROME is notable for its ability to edit an LLM's knowledge in a targeted manner after training, providing a means to recalibrate AI outputs. However, it also presents a potential for misuse, as it can be employed to embed false narratives deeply within a model.

Key Discussion Points:

- **Why People Trust LLMs**: Exploring the reasons behind the widespread trust in LLMs and the associated risks.
- **The Art of Data Poisoning**: A closer look at how maliciously crafted data is inserted into training sets and its profound impact on model behavior.
- **Focus on ROME**: Discussing how the Rank-One Model Editing algorithm can both safeguard against and potentially contribute to the corruption of LLMs.
- **Ethical Considerations**: Reflecting on the ethical implications of manipulating the knowledge within LLMs, which requires not just technical skill but also wisdom and responsibility.

This presentation is designed for data scientists, AI researchers, and Python enthusiasts interested in understanding the vulnerabilities of LLMs and the tools available to protect these systems. While acknowledging other algorithms and methods, this talk will provide a quick demonstration of ROME, offering insights into its utility and dangers.

As people continue to integrate LLMs into everything, we must remain vigilant against the risks of data manipulation. This session challenges us to consider whether we are paying enough attention to these threats, or if we are, metaphorically, just fiddling while Rome burns, allowing foundational trust in data to erode. Join me in this exploration of ROME, where we navigate the fine balance between correcting and corrupting the digital minds that are, whether we like it or not, becoming an integral part of our technological landscape.
Format: markdown
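Since the description centres on ROME, a heavily simplified sketch of the underlying mechanism may help: ROME rewrites a stored association by applying a rank-one (outer-product) update to a single MLP projection matrix inside a transformer. Everything below (the model choice, the target layer, and the random key/value vectors) is an illustrative placeholder; the real algorithm selects the layer via causal tracing and solves for the key and value vectors, as described in the ROME paper.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Conceptual sketch only: real ROME picks the layer by causal tracing and
# solves for the key/value vectors; here they are random stand-ins that show
# the *shape* of the edit, a rank-one update to one MLP projection matrix.
model = GPT2LMHeadModel.from_pretrained("gpt2")
tok = GPT2Tokenizer.from_pretrained("gpt2")

layer = 8                                          # hypothetical target layer
W = model.transformer.h[layer].mlp.c_proj.weight   # GPT-2 Conv1D weight: (d_mlp, d_model)
d_key, d_val = W.shape                             # keys live in MLP space, values in model space

k = torch.randn(d_key)                             # stand-in "key": subject's MLP activation pattern
k = k / k.norm()
v = 0.01 * torch.randn(d_val)                      # stand-in "value": direction encoding the new "fact"

with torch.no_grad():
    W += torch.outer(k, v)                         # the rank-one edit itself

# Activations that align with k are now nudged toward v; everything else is
# left (approximately) untouched, which is what makes the edit so targeted.
prompt = "The Eiffel Tower is located in"
ids = tok(prompt, return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=5, do_sample=False,
                     pad_token_id=tok.eos_token_id)
print(tok.decode(out[0]))
```

The point of the sketch is the shape of the edit: because the update is an outer product, it mainly perturbs outputs whose MLP activations align with the chosen key, which is exactly what makes both targeted correction and targeted corruption possible.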
Comment:
production notes
barn/2024-06-29/15_40_21.ts
Apply: 16:09:27 - 16:10:21 (00:00:53)
S: 15:40:21 -  E: 16:10:21  D: 00:30:00  (Start: 1746.204664)
vlc ~/Videos/veyepar/nbpy/NBPy2024/dv/barn/barn/2024-06-29/15_40_21.ts :start-time=01746.204664 --audio-desync=0
Raw File: 15:40:21 to 16:10:21, Duration 00:30:00
Cut List: start seconds 1746.204664 (Wall 16:09:27), end seconds 0.0 (Wall 15:40:21)
Comments:
mp4
mp4.m3u
dv.m3u
Split:
Sequence:
delete
barn/2024-06-29/16_10_22.ts
Apply: 16:10:22 - 16:31:23 (00:21:01)
S: 16:10:22 -  E: 16:33:02  D: 00:22:40  (End: 1261.293415)
vlc ~/Videos/veyepar/nbpy/NBPy2024/dv/barn/barn/2024-06-29/16_10_22.ts :start-time=00.0 --audio-desync=0
Raw File: 16:10:22 to 16:33:02, Duration 00:22:40
Cut List: start seconds 0.0 (Wall 16:10:22), end seconds 1261.293415 (Wall 16:31:23)
Comments:
mp4
mp4.m3u
dv.m3u
Split:
Sequence:
delete
barn/2024-06-29/16_10_22.ts
Apply: 16:31:24 - 16:33:02 (00:01:38)
S: 16:10:22 -  E: 16:33:02  D: 00:22:40  (Start: 1262.0)
vlc ~/Videos/veyepar/nbpy/NBPy2024/dv/barn/barn/2024-06-29/16_10_22.ts :start-time=01262.0 --audio-desync=0
Raw File: 16:10:22 to 16:33:02, Duration 00:22:40
Cut List: start seconds 1262.0 (Wall 16:31:24), end seconds 0.0 (Wall 16:10:22)
Comments:
mp4
mp4.m3u
dv.m3u
Split:
Sequence:
delete
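For readers puzzling over the numbers in the cut entries above: each `seconds:` value is simply the offset of a cut boundary from the start of its raw file, which is also the value passed to VLC as `:start-time=`. A small sketch, with the wall-clock times and paths taken from the first entry and the helper names being hypothetical:

```python
from datetime import datetime

def offset_seconds(raw_start: str, cut_start: str) -> float:
    """Seconds from the start of a raw file to a cut boundary, from wall-clock times."""
    fmt = "%H:%M:%S"
    return (datetime.strptime(cut_start, fmt) - datetime.strptime(raw_start, fmt)).total_seconds()

def vlc_preview(rel_path: str, start: float) -> str:
    """Rebuild the VLC preview command shown above for a given cut start."""
    dv_root = "~/Videos/veyepar/nbpy/NBPy2024/dv/barn"   # per-show raw-file root (from the page)
    return f"vlc {dv_root}/{rel_path} :start-time={start} --audio-desync=0"

# First cut: raw file 15_40_21.ts starts at 15:40:21, the cut begins at 16:09:27.
start = offset_seconds("15:40:21", "16:09:27")           # -> 1746.0 seconds
print(vlc_preview("barn/2024-06-29/15_40_21.ts", start))
```

The whole-second result (1746.0) differs from the 1746.204664 shown above only because the marks themselves carry sub-second precision that the HH:MM:SS wall-clock display rounds away.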
Rf filename: root is .../show/dv/location/, example: 2013-03-13/13:13:30.dv
Sequence:
get this: check and save to add this
barn/2024-06-29/15_40_21.ts
barn/2024-06-29/16_10_22.ts
Veyepar: Video Eyeball Processor and Review