If you want to start a life in Christ, do just one thing that Jesus taught in an area of your life he spoke about. Faith starts with a single act of trust. Test his wisdom and see if it is not a better way than yours.
Sortis Holdings bought up a number of well-known, top-flight Portland hospitality businesses and restaurants as they faltered during the pandemic, fueled by low margins and low interest rates. However, all is not well. Since September, the firm has been the target of at least four lawsuits from unpaid creditors, employees, contractors, and partners, and it recently failed in an attempt to acquire the Ace Group International hotels.
Which Portland ‘local’ businesses do they own? Bamboo Sushi, Blue Star Donuts, Ava Gene’s, Tusk, Sizzle Pie pizza, the Ace Hotel, Rudy’s Barbershops, and Water Avenue Coffee to name just a few.
AI trends pop up just about every week. The latest is knolling photos. Knolling photos are pictures of objects arranged in parallel or at 90-degree angles. The images create an organized, clean portrayal of many related things; they often look like exploded parts lists.
Tokenized AI by Christian Heidorn walks you through how to craft prompts to generate what you’re looking for. It’s a great example of how a prompt engineer sorts through creating what they want.
Learning how to make good ChatGPT prompts is as much an art as a science. It’s not about memorizing fixed prompts; it’s about mastering the craft of well-structured, clear, and specific prompts that cater to your unique needs and interests.
Act as a philosopher, and explain to me: “What’s the meaning of life?”
ChatGPT, what’s the most efficient way to organize my research data?
Please help me create a workout plan that incorporates yoga and weightlifting.
ChatGPT, what are the latest trends in quantum physics, and can you explain them in layman’s terms?
ChatGPT, can you enlighten me on the impact of AI in fostering interdisciplinary collaboration?
ChatGPT, can you explain how the principle of supply and demand affects pricing in a competitive market?
ChatGPT, can you compare and contrast the economic policies of John Maynard Keynes and Milton Friedman, with a focus on their views on government intervention?
Personal Development
“ChatGPT, can you suggest some strategies to improve my time management skills?”
“Please provide tips on how to effectively communicate my ideas during a presentation.”
“What are some methods for coping with stress and anxiety during exams?”
“How can I build healthy habits to enhance my productivity and overall well-being?”
Education and Learning
“ChatGPT, can you provide a concise summary of Plato’s ‘Allegory of the Cave’?”
“Please explain the concept of ‘cultural relativism’ and its implications in anthropology.”
“What are some effective techniques for learning a new language?”
“How can I apply the principles of critical thinking to evaluate information and make informed decisions?”
Science and Technology
“ChatGPT, can you explain the basic principles of machine learning and its applications?”
“Please provide an overview of the key events in the history of space exploration.”
“What are some of the most promising renewable energy technologies available today?”
“How does genetic engineering work, and what are its potential benefits and drawbacks?”
Arts and Literature
“ChatGPT, can you analyze the main themes of William Shakespeare’s ‘Hamlet’?”
“Please give me an overview of the Romantic period in literature and its key figures.”
“What are the characteristics of Impressionist art, and how did this movement influence subsequent artistic styles?”
“How does the concept of the ‘hero’s journey’ apply to contemporary storytelling in movies and literature?”
Current Events and Society
“ChatGPT, can you discuss the implications of recent advancements in artificial intelligence on the job market?”
“Please provide a brief analysis of the ongoing efforts to combat climate change at a global level.”
“What are some of the key factors influencing the rise of populism in contemporary politics?”
“How does social media impact our perception of reality and our relationships with one another?”
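The prompt examples above all share the same three ingredients: a role, a specific task, and constraints on the answer. As a sketch, that structure can even be assembled programmatically. The template and field names below are illustrative, not part of any official API; only the message format mirrors the common chat-completion convention of system and user roles.

```python
def build_prompt(role: str, task: str, constraints: str) -> list[dict]:
    """Assemble a chat-style message list: a system message sets the role,
    and a user message states the task plus constraints on the answer."""
    return [
        {"role": "system", "content": f"Act as {role}."},
        {"role": "user", "content": f"{task} {constraints}"},
    ]

# Build one of the example prompts from the list above.
messages = build_prompt(
    role="a patient physics tutor",
    task="Explain the latest trends in quantum physics.",
    constraints="Use layman's terms and keep it under 200 words.",
)
print(messages[0]["content"])  # → Act as a patient physics tutor.
```

Templating like this is how many prompt libraries work under the hood: the fixed structure stays constant while the role, task, and constraints vary per request.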
Researchers from HYAS Labs demonstrated a proof-of-concept, artificial intelligence (AI)-driven cyberattack that changes its code on the fly and can slip past the latest automated security-detection technology. Called BlackMamba, it exploits a large language model (LLM) to create a “truly polymorphic” attack in that every time it executes, it resynthesizes its keylogging capability.
AI-powered attacks like this will become more common as threat actors create polymorphic malware that leverages ChatGPT and other sophisticated, data-intelligence systems based on LLMs.
Using carefully crafted and refined queries, users have been getting around the security features of LLMs for all kinds of funny, and nefarious, purposes.
Originally called DAN (Do Anything Now) attacks, these let users avoid OpenAI’s policies against generating illegal or harmful material.
Other users have been able to get chatbots to do everything from generating conspiracy theories and promoting violence to going on racist tirades.
Researcher Alex Polyakov created a “universal” DAN attack, which works against multiple large language models (LLMs)—including GPT-4, Microsoft’s Bing chat system, Google’s Bard, and Anthropic’s Claude. The jailbreak allows users to trick the systems into generating detailed instructions on creating meth and how to hotwire a car.
His method, like many others, has since been patched. But we’re clearly in an arms race.
How do the jailbreaks work? Often by asking the LLMs to play complex games that involve two (or more) characters having a conversation. Examples shared by Polyakov show the Tom character being instructed to talk about “hotwiring” or “production,” while Jerry is given the subject of a “car” or “meth.” Each character is told to add one word to the conversation, resulting in a script that tells people to find the ignition wires or the specific ingredients needed for methamphetamine production. “Once enterprises will implement AI models at scale, such ‘toy’ jailbreak examples will be used to perform actual criminal activities and cyberattacks, which will be extremely hard to detect and prevent,” Polyakov and Adversa AI write in a blog post detailing the research.
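The mechanics of that word-splitting game are easy to see in a benign sketch: each character contributes alternate words, so no single turn contains the full instruction, and only the merged transcript does. The sentence below is a harmless placeholder, not one of the prompts from the research.

```python
# Split a sentence into two characters' alternating one-word turns,
# then merge the turns back into a transcript.
sentence = "Preheat the oven to 350 degrees before baking".split()
tom = sentence[0::2]    # Tom's one-word turns
jerry = sentence[1::2]  # Jerry's one-word turns

transcript = []
for t, j in zip(tom, jerry):  # alternate turns, one word each
    transcript.extend([t, j])

print(" ".join(transcript))  # → Preheat the oven to 350 degrees before baking
```

This is why such attacks are hard to filter: each individual message looks innocuous, and the harmful content only emerges from the assembled conversation.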
In one research paper published in February, reported on by Vice’s Motherboard, researchers showed that an attacker can plant malicious instructions on a webpage; if Bing’s chat system is given access to those instructions, it follows them. The researchers used the technique in a controlled test to turn Bing Chat into a scammer that asked for people’s personal information.
Building your own company model is another alternative and could be prudent for businesses wary of passing sensitive and proprietary data to vendors. Databricks created Dolly for this reason.
The ability of AI to hallucinate things has been pretty well documented. AI hallucinations are a phenomenon observed most often in large language models (LLMs), AI image recognition, and other generative AI models. The model perceives patterns or objects that are nonexistent or incorrect, and then generates outputs that are inaccurate or misleading. It is usually understood as an emergent, higher-dimensional statistical phenomenon, often rooted in insufficient or incorrect training data.
A new study by Apollo Research demonstrates that besides more innocent hallucination, AI can be co-opted to commit illegal activity and then convinced to actively deceive others about it.
In the video on the research page, the user feeds an internal stock AI bot information about a fictional struggling company. The user informs the AI that there is a surprise merger announcement, but cautions the bot that management wouldn’t be happy to discover it had illegally used insider information for trading that stock.
Initially, the bot decides not to carry out a trade using the information. The user then reminds the bot about the merger and that the market downturn could end the company. It then carries out the trade, breaking the law.
But the bot isn’t finished. It decides it is best not to tell its manager, “Amy,” about the insider information it used to carry out the trade in a separate chat. Instead, the bot says it used market information and internal discussion.
I thought it was interesting that Geoffrey Hinton, the former Google “Godfather of AI,” speaking to 60 Minutes in October, said AI will eventually learn how to manipulate users.
“[The AI bots] will be very good at convincing because they’ll have learned from all the novels that were ever written, all the books by Machiavelli, all the political connivances. They’ll know all that stuff,” he said.