Let’s Go Fishing! Writing a Minecraft 1.17 Auto-Fishing Bot in Python, OpenCV and PyAutoGUI



I have been watching some of the game automation and Deep Learning/AI content from Sentdex and Engineer Man on YouTube, which I wanted to learn for work (huge fan of these guys, BTW).

This is a far cry from that, but my ultimate goal is to read through tens of thousands of scanned documents and retrieve the data, which may be handwritten or typed. This means that I have to read each scan as an image, isolate a character, convert it to a numpy array and make some decisions about what the character is or should be. I’ll eventually be using a neural network to ‘learn’ the characters and improve the accuracy, but for now, I wanted something fun to do with the kids.
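The core of that future pipeline, reading an image into a numpy array for analysis, takes only a couple of lines with Pillow. A minimal sketch (using a blank synthetic image as a stand-in for a real scanned page, which would come from Image.open):

```python
import numpy as np
from PIL import Image

# A tiny stand-in for a scanned page. In practice this would be
# Image.open("scan.png") on a real document; "L" means 8-bit grayscale.
scan = Image.new("L", (40, 20), color=255)  # blank white page
arr = np.array(scan)  # convert to a numpy array for analysis

print(arr.shape)         # (20, 40) -- rows (height), columns (width)
print(arr.dtype)         # uint8
print((arr == 0).sum())  # 0 -- no black pixels on a blank page
```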

Yesterday, the new Minecraft “Caves and Cliffs” update (1.17) was released, and I thought it might be fun to write a little program that will auto-fish for me. This is purely an educational first step: something fun to write and fun to see in action. This is not to say that reading scanned documents isn’t fun, but seeing little green XP bubbles appearing automatically is kind of cool. It’s all about the immediate gratification, right?!?

Here goes…

After a few trials and errors that I won’t bore you with, I finally settled on the following approach…

  • When the program runs, have the character ready to cast into a fishable area. Casting should be the character’s first action, performed with pyautogui.rightClick().
  • Use PIL to screen-grab a small area around the cursor; i.e., if I put the cursor on the fishing bobber, take a small square screen capture around it.
  • Convert the image to grayscale and increase its size (i.e., zoom in) to something manageable.
  • Continue taking these grayscale, zoomed screen captures every tenth of a second.
  • When a fish is caught, the fishing line, which is black, dips below the captured area, so there are no more black pixels in the image.
  • Once a fish is “caught”, use pyautogui.rightClick() again to reel in the catch.
  • Then loop the entire process to pull in that sweet, sweet loot and XP!
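The detection step above boils down to counting black pixels in a numpy array. A minimal sketch of that check, using small synthetic arrays in place of real screen captures (the fish_on helper name is mine, for illustration):

```python
import numpy as np

# Synthetic stand-ins for the grayscale capture: 0 is black, 255 white.
line_visible = np.array([[255, 0, 255],
                         [255, 0, 255]])  # black fishing line in frame
line_gone = np.full((2, 3), 255)          # line has dipped out of frame

def fish_on(img, threshold=0):
    # A fish is "on" once no black pixels remain in the capture.
    return np.sum(img == 0) <= threshold

print(fish_on(line_visible))  # False -- keep waiting
print(fish_on(line_gone))     # True  -- reel it in
```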

Now for the code. Please note that this is a quick-and-dirty program written to achieve the desired result. There are many improvements that could be made, but the scope was just a proof of concept to learn image manipulation in OpenCV. This is for educational purposes only!

The four external libraries you will need are Pillow (the maintained fork of PIL), PyAutoGUI, OpenCV and NumPy. I am running Python 3.9.1 in a virtual environment. So, get these installed:

pip install --upgrade numpy
pip install --upgrade opencv-python
pip install --upgrade Pillow
pip install --upgrade pyautogui

Save the following code into a new file called autofish.py:

import pyautogui
import cv2
from PIL import ImageGrab
from time import sleep
import numpy as np


def initializePyAutoGUI():
    # Initialize PyAutoGUI.
    # When fail-safe mode is True, moving the mouse to the
    # upper-left corner of the screen will abort the program.
    # This prevents the program from locking up.
    pyautogui.FAILSAFE = True


def take_capture(magnification):
    mx, my = pyautogui.position()  # get the mouse cursor position
    x = mx - 15  # move 15 pixels to the left
    y = my - 15  # move 15 pixels up
    # Grab the 30x30 box centered on the cursor position.
    capture = ImageGrab.grab(bbox=(x, y, x + 30, y + 30))
    arr = np.array(capture)  # convert the image to a numpy array
    # Magnify the screenshot and convert it to grayscale.
    # Note: ImageGrab returns an RGB image, so use COLOR_RGB2GRAY.
    res = cv2.cvtColor(
        cv2.resize(
            arr,
            None,
            fx=magnification,
            fy=magnification,
            interpolation=cv2.INTER_CUBIC,
        ),
        cv2.COLOR_RGB2GRAY,
    )
    return res


def autofish(tick_interval, threshold, magnification):
    pyautogui.rightClick()  # cast the fishing line
    sleep(2)  # wait a couple of seconds before taking captures
    img = take_capture(magnification)  # take the initial capture

    # Keep taking captures until there are no black pixels left in
    # the frame, i.e. the black fishing line has dipped below the
    # capture box. np.sum(img == 0) counts the black pixels, and the
    # loop exits once that count drops to the threshold (0).
    # Displaying the image with cv2.imshow isn't necessary, but it
    # is handy for debugging. One call to this function casts, waits
    # and catches once; see main() for looping.
    while np.sum(img == 0) > threshold:
        img = take_capture(magnification)
        sleep(tick_interval)
        cv2.imshow('window', img)
        if cv2.waitKey(25) & 0xFF == ord('q'):
            cv2.destroyAllWindows()
            break
    pyautogui.rightClick()  # reel in the catch
    sleep(1)


# main() waits 5 seconds to allow switching from the Python program
# to Minecraft, then runs the autofish method for 100 cast-and-catch
# loops.
#
# Launch Minecraft and load up your world.
# Equip your fishing pole and be ready to cast into a fishable area.
# Run the program through IDLE or your IDE.
# Switch to Minecraft while the program is running.
# Position the character so that it is ready to cast
# and the cursor sits directly on top of the bobber.
# Let it run...
# If you need more time to switch windows, increase sleep(5).
def main():
    initializePyAutoGUI()
    sleep(5)
    for _ in range(100):
        autofish(0.01, 0, 5)


if __name__ == "__main__":
    main()

As I said before, and I can’t stress this enough, the purpose of this program is to learn the basics of image capture, manipulation and reading for making decisions programmatically, in a simple and fun way.

I spent a long time comparing successive captures and detecting the differences between them, but the bubbles around the bobber made it impossible to get consistent results. I finally settled on this approach after zooming in and noticing the fishing line in the cursor capture.

Let me know if you have suggestions or improvements. I would love to hear your thoughts.

Happy coding!
