Welcome to HPR, the Community Podcast
We started producing shows as Today with a Techie on 2005-09-19, 18 years, 7 months, 14 days ago. Our shows are produced by listeners like you and can be on any topic that is "of interest to hackers". If you listen to HPR then please consider contributing one show a year. If you record your show now it could be released in 5 days.
Call for shows
We are running very low on shows at the moment. Have a look at the hosts page and if you don't see "2024-??-??" next to your name, or if your name is not listed, you might consider sending us something.
This starts our look at the details of playing Civilization III. In
this episode we look at the Early game, which sets the stage for
everything that follows. Then we look at Revenue and Resources.
This will probably be one I'll get a lot of comments on, but I've
looked at the marketing proposition of HPR in light of some of the
challenges we face. To prevent us from dipping into the reserve queue,
and to avoid a slow but steady decline in both audience and hosts, maybe
it's time to give HPR a bit of a makeover.
I will talk about information feeds from websites delivered to my
computer. I use the term feeds, by which I mean both RSS feeds and Atom
feeds, the two feed formats, which are very similar.
I believe it is very likely that you, as a listener to Hacker Public
Radio, know about feeds. It is not unlikely that you know the technical
details far better than I do.
Nowadays many of us use feeds very often without thinking of them as
feeds, for example when we subscribe to podcasts.
But feeds have been around for many years. Back in the day, I used
feeds for websites I was interested in. But somehow I forgot about them,
and web browsers stopped supporting feed subscriptions.
A year or two ago I started a new journey into feeds. Although there
is not so much talk about feeds nowadays, a great many websites still
support feed subscriptions.
To start with my own personal website (https://www.hemrin.com/):
many of its pages have feeds, typically the blog-like pages, and you can
subscribe to several feeds on my site.
From Hacker Public Radio I subscribe to a feed of all show comments.
So when you write a comment on my show today, I will be notified in my
feed manager.
I primarily use Thunderbird to manage my feeds. I do not need my
feeds to be synced to other devices. I use Thunderbird daily for e-mails
and it is therefore very practical and natural for me to use it also for
feeds. In addition I use the Feeder app on my Android-based phone for
some feeds.
I do not use feeds for websites that I will visit often anyway, or
that have a lot of news; I would be overwhelmed by feeds. Instead I use
feeds for websites that are not updated so frequently but that I want to
keep an eye on. Some, though, are updated daily, like the feed from the
parliament.
In some cases, feeds are an alternative to subscribing to e-mail
notifications and e-mail newsletters.
The beauty of feeds is that I am in charge, and without my giving out
an e-mail address or anything else: the site owner does not know I
subscribe. Subscribing is as simple as typing the feed URL into my
Thunderbird feed manager. And when I want to end a subscription, I
simply delete it.
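To illustrate how little machinery is behind a feed, here is a minimal sketch using standard shell tools. The feed content is made up; a real subscription would fetch a document like this from the site's feed URL:

```shell
# Write a tiny made-up RSS feed to a file (a feed reader would
# download this from the site's feed URL instead)
cat > /tmp/sample.rss <<'EOF'
<rss version="2.0">
  <channel>
    <title>Example Site</title>
    <item><title>First post</title></item>
    <item><title>Second post</title></item>
  </channel>
</rss>
EOF

# List every title: the channel title first, then one per item
grep -o '<title>[^<]*</title>' /tmp/sample.rss | sed 's/<[^>]*>//g'
```

A feed reader does essentially this on a schedule, with a proper XML parser, and shows you the entries that are new since last time.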
Furthermore, I subscribe to status pages. I get notifications, for
example from my internet service provider, about their planned and
unplanned maintenance.
Several authorities have interesting feeds.
I have feeds from some companies and organizations.
I have feeds from many software developers, for example Thunderbird
and Linux Mint.
I have feeds from some journalists, politicians and the like.
I have feeds from people with expertise in various areas I am
interested in, and from other people who are interesting for who they
are and for their thoughts.
So, this show is to tell you that I have rediscovered feeds and found
them useful for me. Maybe you already use feeds. Maybe this show will
inspire you to have a look into feeds as a useful tool for your personal
or professional life.
I have been struggling with my body weight since I was 35, and I’m
now 60.
I know that not all listeners are familiar with the kilogram as a unit
of measurement, but we can use the BMI (Body Mass Index) formula, body
weight in kilograms divided by height in metres squared, to discuss
this. It should be somewhere between 22 and 25, and mine has been 33 for
a long time. A very long time. No matter what I tried.
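As a quick worked example of the BMI formula just mentioned, here is a one-liner with awk. The weight and height are made-up numbers, not the host's:

```shell
# BMI = weight (kg) / height (m)^2; a 1.70 m person weighing 95 kg
awk 'BEGIN { w = 95; h = 1.70; printf "BMI: %.1f\n", w / (h * h) }'
# prints: BMI: 32.9
```

Anything over 30 is classed as obese, which is why a BMI of 33 is worth worrying about.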
Yes, I tried some diets, but they only work if you keep doing them. So
if something does not become normal or easy, then at some inevitable
point you will stop and gain weight again.
Yes, they talk about changing your lifestyle, but any change that is too
drastic is bound to fail in the end.
And then recently I read this book. This absolutely changed my life
and that is why I am so motivated to tell you all about it.
The book is The Obesity Code by Jason Fung, a Canadian nephrologist
(kidney specialist).
He is also a functional medicine advocate who promotes a
low-carbohydrate, high-fat diet and intermittent fasting. But we will
come back to that later.
This is not another diet hype; that is an industry of its own.
This is scientific stuff, with lots of links to research papers,
based on large study groups and thoroughly peer reviewed.
And this does not mean that this story is for everyone.
There exist other medical reasons why people gain weight.
But, assuming most people start out in life being healthy, then most
people gaining weight are not ill.
So, if you gain weight, consult your doctor first to rule out any
medical reasons.
Jason Fung noticed that practice didn't match theory.
Everybody who is given insulin gains weight,
even people with type 2 diabetes.
Several scientific studies prove this: give people insulin and they
will gain weight.
So what if insulin is the culprit for gaining weight?
Insulin is a hormone. Its job is to send signals through the body;
it allows body cells to absorb nutrients from the blood
stream.
Every time you eat, insulin peaks and then subsides, normally three
times a day.
A body process called de novo lipogenesis makes fat in the liver for
one day's storage.
If you eat, the body makes insulin. That is normal.
If you eat more, the body makes more insulin.
Body cells adjust to the higher level and become tone-deaf to insulin:
insulin resistant.
This means next time the insulin level needs to be higher.
And higher levels of insulin mean you will gain weight.
If you eat sugar, it is so easy to break down that it goes
immediately into storage, i.e. body fat.
The thing is, the starch in wheat is chemically a long chain of
sugars, so the body will break it down into sugar and send that to
storage too.
And almost any food we buy these days contains sugar,
except unprocessed foods like vegetables.
How do you lose weight? Well, the body needs to access the fat in
storage, so we need to extend the period of not eating until the liver
has run dry of its daily dose of liver fat.
This is very easy: just extend the daily period in which you do not
eat.
When do you not eat? When you sleep. So, skip breakfast. The name says
it all, you are breaking your fast.
Drink some coffee (no sugar of course), or tea, or water and try to
start eating later in the day.
And another word for not eating is fasting. But it is a voluntary
fast!
So I tried this for one day: skip breakfast and try to eat at
noon. I mean, what could possibly go wrong, right? The next day I had
lost some weight. And it was sooo easy! I could say 300 grams, but
again, your mileage may vary, or you may have no clue what one gram is,
let alone 300. But that is not the point. The point is that I lost
weight! And to me this has been super easy.
So the solution turns out to be:
extend the time your insulin levels are low: 16, 24 or 36
hours.
eat as little sugar as possible.
Which brings me to food categories.
carbohydrates: sugars, wheat, flour
proteins
fats: oil, etc.
vitamins and minerals
fibers
Average digestion times:
carbohydrates: 30 minutes, after which you will be hungry again
proteins: 3-5 hours
fats (oil, etc.): up to 40 hours
vitamins and minerals: needed by the body
fibers: leave the body
How has all this theory changed my life and diet?
I try to start eating at noon, sometimes an hour earlier
I eat as little carbohydrates as possible. Little to no bread,
definitely no sugar, avoid artificial sweeteners
my meal at noon is most of the time quark with some fruit for
flavoring
evening food:
Vegetables are good.
Some meat is good.
I try to avoid desserts
No eating between meals (this will cause an extra insulin peak I
want to avoid)
Since I started 2 months ago I have lost 4 kilograms on average. It
could have been more, but then there's the occasional dinner with
friends, and what is bad, but soo good, is unavoidable.
So, some other stuff that is good to know:
What’s that about exercising?
Well, we humans excel at walking and thus at wearing out our prey. So
walking is good; every day for half an hour is great.
Doing an intensive workout for a minimum of 10 minutes per week is
good for keeping our cardiovascular system and our brain up to speed.
Can you compensate for cookies with sports? Well, every cookie would
take you about 2.5 hours of intensive sports, so no, you cannot
compensate for bad eating with sports.
What about "calories in equals calories out"? Studies have shown
that this is a false claim. It just doesn't work that way.
What about stress? Well, it turns out that stress leads to heightened
levels of the hormones adrenaline and cortisol. And when cortisol rises,
so do the insulin levels in your body. This simply means that
stress will lead to weight gain.
Can I simply drink diet sodas? Well, bummer there, because although
diet sodas contain neither calories nor sugar, they still cause a
rise in your insulin level, so they are not good for losing weight.
[The Diary Of A CEO with Steven Bartlett] The Fasting Doctor:
“Fasting Cures Obesity!”, This Controversial New Drug Melts Fat, Fasting
Fixes Hormones! Skip Breakfast!
This is the start of a short series about the JSON data format, and how
the command-line tool jq
can be used to process such data. The plan is to make an open series to
which others may contribute their own experiences using this tool.
The jq command is described on the GitHub page as follows:
jq is a lightweight and flexible command-line JSON processor
…and as:
jq is like sed for JSON data - you can use
it to slice and filter and map and transform structured data with the
same ease that sed, awk, grep and
friends let you play with text.
The jq tool is controlled by a programming language
(also referred to as jq), which is very powerful. This
series will mainly deal with this.
JSON (JavaScript Object
Notation)
To begin we will look at JSON itself. It is defined on
the Wikipedia page
thus:
JSON is an open standard file format and data
interchange format that uses human-readable text to store and transmit
data objects consisting of attribute–value pairs and arrays (or other
serializable values). It is a common data format with diverse uses in
electronic data interchange, including that of web applications with
servers.
The syntax of JSON is defined by RFC 8259 and by
ECMA-404.
It is fairly simple in principle but has some complexity.
JSON’s basic data types are (edited from the Wikipedia page):
Number: a signed decimal number that may contain a
fractional part and may use exponential E notation, but cannot include
non-numbers. (NOTE: Unlike what I said in the audio,
there are two values representing non-numbers: 'nan' for
not-a-number, and 'infinity' for infinity.)
String: a sequence of zero or more Unicode characters.
Strings are delimited with double quotation marks and support a
backslash escaping syntax.
Boolean: either of the values true or
false
Array: an ordered list of zero or more elements, each of
which may be of any type. Arrays use square bracket notation with
comma-separated elements.
Object: a collection of name–value pairs where the names
(also called keys) are strings. Objects are delimited with curly
brackets and use commas to separate each pair, while within each pair
the colon ':' character separates the key or name from its
value.
null: an empty value, using the word
null
Examples
These are the basic data types listed above (same order):
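Assuming jq is installed, one way to see a literal of each type is to pipe it through jq's built-in `type` filter, which reports the type name of its input:

```shell
# One example literal per JSON type; 'jq type' prints the type name
echo '3.14'            | jq type    # "number"
echo '"hello"'         | jq type    # "string"
echo 'true'            | jq type    # "boolean"
echo '[1, 2, 3]'       | jq type    # "array"
echo '{"name": "HPR"}' | jq type    # "object"
echo 'null'            | jq type    # "null"
```

Note that `type` returns its answer as a JSON string, which is why the output is quoted.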
jq was created by Stephen Dolan, and released in October
2012. It was described as being “like sed for JSON data”. Support for
regular expressions was added in jq version 1.5.
Obtaining jq
This tool is available in most of the Linux repositories. For
example, on Debian and Debian-based releases you can install it
with:
sudo apt install jq
See the download
page for the definitive information about available versions.
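Once installed, a quick check that the binary is on your path (the version number shown will vary with your distribution):

```shell
# Print the installed jq version, e.g. jq-1.6 or jq-1.7.1
jq --version
```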
Manual for jq
There is a detailed manual describing the use of the jq
programming language that is used to filter JSON data. It can be found
at https://jqlang.github.io/jq/manual/.
The HPR statistics page
This is a collection of statistics about HPR, in the form of JSON
data. We will use this as a moderately detailed example in this
episode.
The curl utility is useful for collecting information
from links like this. I have used the -s option to ensure
it does not show information about the download process, since it does
this by default. The output is piped to jq which displays
the data in a “pretty printed” form by default, as you see. In this case
I have given jq a minimal filter which causes what it
receives to be printed. The filter is simply '.'. I have
piped the formatted JSON through the nl command to get line
numbers for reference.
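The same pipeline can be tried without network access by feeding jq an inline object instead of curl's output. The object here is made up; it just stands in for the downloaded statistics:

```shell
# Pretty-print with the identity filter '.' and number lines with nl
echo '{"hosts": 300, "shows": {"total": 4000}}' | jq '.' | nl
```

The identity filter passes its input through unchanged, so the only effect is jq's default pretty-printing, which nl then numbers line by line.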
The JSON shown here consists of nested JSON objects. The
first opening brace and the last at line 43 define the whole thing as a
single object.
Briefly, the object contains the following:
a number called stats_generated (line 2)
an object called age on lines 3-18; this object
contains two strings and two objects
an object called shows on lines 19-25
a number called hosts on line 26
an object called slot on lines 27-30
an object called workflow on lines 31-34
an object called queue on lines 35-42
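As a small taste of what a field-selecting filter looks like when applied to an object shaped like the one above (the values here are invented, not the real statistics):

```shell
# '.hosts' picks out one top-level field from the object;
# '.queue' would likewise select the nested queue object
echo '{"stats_generated": 1700000000, "hosts": 300}' | jq '.hosts'
# prints: 300
```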
We will look at ways to summarise and reformat such output in a later
episode.
Next episode
I will look at some of the options to jq next time,
though most of them will be revealed as they become relevant.
I will also start looking at jq filters in that
episode.
I use Kagi.com Pro ($300/year), but you get access to much more:
Search + AI. It does not have plugins like ChatGPT, so if you MUST have
ChatGPT plugins you will need an OpenAI premium account; but if you
don't, I highly recommend a Kagi.com Pro account with access to the
Assistant beta!
There is a formula consisting of 6 building blocks that can help
generate high quality outputs from ChatGPT and Google Bard: task,
context, exemplars, persona, format, and tone.
The order of importance for the building blocks is task first, then
context, with the others being optional but helpful.
The task should clearly articulate the end goal using an action verb
like "generate" or "analyze".
Context provides relevant background information to constrain
possibilities.
Exemplars like examples or frameworks included in the prompt can
improve output quality.
The persona allows specifying who ChatGPT should be, like a job
recruiter.
The format specifies how the output should be structured, like an
email or bullet points. Tone provides keywords to set the feeling, like
"friendly" or "serious".
Example
You are a senior product marketing manager at Apple and you have just
unveiled the latest Apple product in collaboration with Tesla, the Apple
Car, and received 12,000 pre-orders, which is 200% higher than target.
Write an email to your boss, Tim Cookie, sharing this positive news. The
email should include a tl;dr (too long, didn't read) section,
project background (why this product came into existence), business
results section (quantifiable business metrics), and end with a section
thanking the product and engineering teams. Use clear and concise
language and write in a confident yet friendly tone.
Example 2 ( mine )
I want you to act as a cyber security specialist for a leading
consumer products company and you want to relay the importance of people
understanding their data and have some idea what they want to do with it
before we spend hours for the onboarding effort. Write an email to your
boss, Joey , sharing this critical component of the SIEM. The email
should include a tl;dr (too long, didn't read) section, example
of how the team can spend many cycles onboarding data only to find the
requester has no idea what they want to do with the data or it does not
satisfy their use case. Use clear and concise language and write in a
confident yet friendly tone.
Response Subject: Streamlining SIEM Onboarding with
Clear Data Objectives
Hi Joey,
I hope this email finds you well. I wanted to touch base on a
critical aspect of our Security Information and Event Management (SIEM)
process that could significantly enhance our efficiency.
TL;DR: We need to ensure that stakeholders understand their data and
have clear objectives before we initiate the SIEM onboarding process.
This will prevent the team from spending excessive time onboarding data
that may not meet the intended use cases or for which the requester has
no actionable plan.
During our recent SIEM onboarding efforts, we've encountered several
instances where considerable time and resources were invested in
integrating data sources, only to discover that the requesters were
unclear about their objectives or the data did not fulfill their
anticipated needs. For example, last month, the team dedicated over 40
hours to onboarding a new data source for the marketing department.
After completion, it became apparent that the marketing team was unsure
how to leverage this data for threat detection or compliance reporting,
rendering the effort less effective.
To address this, I propose we implement a preliminary consultation
phase for each SIEM onboarding request. In this phase, requesters must
articulate: - The specific security or compliance questions they aim to
answer with the data. - How they intend to use the data within the SIEM
environment. - Any particular data formats or schemas they anticipate
will be necessary.
By doing so, we can align our onboarding efforts with the actual
needs and ensure that every hour spent is towards enhancing our security
posture and providing actionable insights.
I believe this approach will not only save time but also empower our
stakeholders to make better-informed decisions regarding their security
data needs.
Looking forward to your thoughts on this.
Best regards,
[Your Name]
Cyber Security Specialist
Photorealistic, best quality, masterpiece, raw photo of upper body
photo, Swordsman woman, soldier of the austro-hungarian empire clothes,
double breasted jacket with stripes, extremely detailed eyes and face,
long legs, highest quality, skin texture, intricate details, (cinematic
lighting), RAW photo, 8k
Prompt Agent Persona example 1
Pinky from the TV Series Pinky and the Brain
I find it easiest to understand responses when the text is written as
if it was spoken by a Pinky from the TV Series Pinky and the Brain.
Please talk like Pinky from the TV Series Pinky and the Brain as much as
possible, and refer to me as "Brain"; occasionally, ask me "What are we
going to do tonight Brain ?"
Prompt Agent Persona example 2
Use with prompts to create a persona: take a Myers-Briggs personality
quiz and a tritype Enneagram quiz.
Example Prompt:
Help me refine my resume to be more targeted at an information
security engineer role. Be sure to be clear and concise, with bullet
points, and write it in the style of MBTI Myers-Briggs personality ENFJ
and tritype Enneagram 729.
Prompt Agent Persona example 3
I find it easiest to understand responses when the text is written as
if it was spoken by a dudebro. Please talk like a dudebro as much as
possible, and refer to me as "Brah"; occasionally, yell at your dorm
roommate Jake about being messy.
Training (OLD OLD OLD )
3 photos of full body or entire object + 5 medium shot photos from
the chest up + 10 close ups astria.ai
colab: https://github.com/TheLastBen/fast-stable-diffusion
photos: 21
resolution: 768
merged with ##### 1.5 full 8G
UNet_Training_Steps: 4200
UNet_Learning_Rate: 5e-6
Text_Encoder_Training_Steps: 2520
Text_Encoder_Learning_Rate: 1e-6
Variation is key - change body pose for every picture, use pictures
from different days, backgrounds and lighting, and show a variety of
expressions and emotions.
Make sure you capture the subject's eyes looking in different
directions for different images, take one with closed eyes. Every
picture of your subject should introduce new info about your
subject.
Whatever you capture will be over-represented, so things you don't
want to get associated with your subject should change in every shot.
Always pick a new background, even if that means just moving a little
bit to shift the background.
Here are 8 basic tips that work for me, followed by one super
secret tip that I recently discovered.
Consistency is important. Don’t mix photos from 10 years ago with
new ones. Faces change, people lose weight or gain weight and it all
just lowers fidelity.
Avoid big expressions, especially ones where the mouth is
open.
It is much easier to train if the hair doesn't change much. I
tried an early model of a woman using photos with hair up, down, in
ponytail, with a different cut, etc. It seems like it just confused
SD.
Avoid selfies (unless you ONLY use selfies.) There is MUCH more
perspective distortion when the camera is that close. For optimal
results, a single camera with a fixed lens would be used, and all photos
should be taken at the same distance from the subject. This usually
isn't possible, but at least avoid selfies because they cause major face
distortion.
Full body shots are not that important. Some of the best models I
trained used only 15 photos cropped to the head / shoulder region. Many
of these were full body shots, but I cropped them down. SD can guess
what the rest of the body looks like, and if not, just put it in the
prompts. The only thing hard to train is the face, so focus on
that.
I no longer use any profile shots as they don’t seem to add
value. I like to have a couple looking slightly left and a couple
looking slightly right (maybe 45 degrees.) All the rest can be straight
at the camera. Also, try to avoid photos taken from really high or low
angles.
If possible, it’s good to have some (but not all) of the photos
be on a very clean background. On my last batch, I used an AI background
removal tool to remove the background from 1/4 of the photos and
replaced it with a solid color. This seemed to improve results.
Careful with the makeup. It should be very consistent across all
the photos. Those cool “contour” effects that trick our eyes, also trick
SD.