This command extracts text from an image and optionally translates it. The image can be either an attachment or a link.
This is a beta feature and functionality may change over time.
Specify the language you want to detect from the image.
Translate text by specifying the image's language, then the destination language.
The PSM (page segmentation mode) determines how text is detected in the image. The default is 11 (psm-11).
This is useful if you want to detect vertically aligned text or text formatted as a single block, or if you simply want to experiment with the values to improve accuracy.
The table below lists the valid PSM values you can use in the command.

| PSM | Description |
| --- | ----------- |
| 0 | Orientation and script detection (OSD) only. |
| 1 | Automatic page segmentation with OSD. |
| 2 | Automatic page segmentation, but no OSD, or OCR. |
| 3 | Fully automatic page segmentation, but no OSD. |
| 4 | Assume a single column of text of variable sizes. |
| 5 | Assume a single uniform block of vertically aligned text. |
| 6 | Assume a single uniform block of text. |
| 7 | Treat the image as a single text line. |
| 8 | Treat the image as a single word. |
| 9 | Treat the image as a single word in a circle. |
| 10 | Treat the image as a single character. |
| 11 | Sparse text. Find as much text as possible in no particular order. This is the default. |
| 12 | Sparse text with OSD. |
| 13 | Raw line. Treat the image as a single, raw text line. For debug purposes only. |
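These modes mirror Tesseract's `--psm` option. As a hypothetical sketch (this is not the bot's actual code, and the function name is illustrative), a `psm-N` argument could be turned into a Tesseract config string like this:

```python
import re

def parse_psm(arg, default=11):
    """Parse a 'psm-N' command argument into a Tesseract --psm config string.

    Falls back to the command's default of PSM 11 (sparse text) when no
    argument is given, and rejects values outside the valid 0-13 range.
    """
    if arg is None:
        return f"--psm {default}"
    match = re.fullmatch(r"psm-(\d{1,2})", arg)
    if not match or not 0 <= int(match.group(1)) <= 13:
        raise ValueError(f"invalid PSM argument: {arg!r}")
    return f"--psm {match.group(1)}"
```

The resulting string is the kind of value you would pass to an OCR engine, e.g. `pytesseract.image_to_string(image, config=parse_psm("psm-6"))` if the bot were built on pytesseract.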
Adjusting the confidence filter may help if the bot isn't extracting all of the text in the image. With a lower value, the bot will extract text it's less "confident" about. Keep in mind that this may also include garbage data.
The range is 0-100, and the default is 90. As with PSM, add the value with a "conf-" prefix, such as conf-85 to use a confidence level of 85.
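To illustrate how such a filter might work (a minimal sketch, assuming Tesseract-style per-word output where each word carries a 0-100 confidence and non-text blocks are marked -1; the function and data here are illustrative, not the bot's actual code):

```python
def filter_by_confidence(ocr_data, threshold=90):
    """Keep only words whose OCR confidence meets the threshold.

    `ocr_data` mimics the per-word output of Tesseract's TSV mode:
    a dict with parallel 'text' and 'conf' lists, where non-text
    blocks carry a confidence of -1.
    """
    words = []
    for text, conf in zip(ocr_data["text"], ocr_data["conf"]):
        if text.strip() and float(conf) >= threshold:
            words.append(text)
    return " ".join(words)

# Simulated OCR output: lowering the threshold to 85 (conf-85)
# keeps the shakier second word that the default of 90 drops.
sample = {"text": ["Hello", "w0rld", ""], "conf": [96, 87, -1]}
print(filter_by_confidence(sample))                # Hello
print(filter_by_confidence(sample, threshold=85))  # Hello w0rld
```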